Optimal control theory

Hello, all –

Apparently, the theory du jour in neuroscience is optimal control theory.
At least that is what Steve Scott uses, and what I see frequently in the
few other discussions I have looked at.

Here is part of a Wikipedia article on the subject. See

http://en.wikipedia.org/wiki/Optimal_control

Don’t try to follow the links – I don’t think they will work since I
just copied these segments from the article. But who knows?


==========================================================================

General method

Optimal control deals with the problem of finding a control law for a
given system such that a certain optimality criterion is achieved. A
control problem includes a cost functional that is a function of state
and control variables. An optimal control is a set of differential
equations describing the paths of the control variables that minimize
the cost functional. The optimal control can be derived using
Pontryagin’s maximum principle (a necessary condition, also known as
Pontryagin’s minimum principle or simply Pontryagin’s Principle [2]),
or by solving the Hamilton-Jacobi-Bellman equation (a sufficient
condition).

We begin with a simple example. Consider a car traveling on a
straight line through a hilly road. The question is, how should the
driver press the accelerator pedal in order to minimize the total
traveling time? Clearly in this example, the term control law refers
specifically to the way in which the driver presses the accelerator and
shifts the gears. The “system” consists of both the car and the
road, and the optimality criterion is the minimization of the total
traveling time. Control problems usually include ancillary constraints.
For example the amount of available fuel might be limited, the
accelerator pedal cannot be pushed through the floor of the car, speed
limits, etc.

A proper cost functional is a mathematical expression giving the
traveling time as a function of the speed, geometrical considerations,
and initial conditions of the system. It is often the case that the
constraints are interchangeable with the cost functional.
Another optimal control problem is to find the way to drive the car
so as to minimize its fuel consumption, given that it must complete a
given course in a time not exceeding some amount. Yet another control
problem is to minimize the total monetary cost of completing the trip,
given assumed monetary prices for time and fuel.
A more abstract framework goes as follows. Minimize the
continuous-time cost functional

J = \Phi(\textbf{x}(t_0), t_0, \textbf{x}(t_f), t_f) + \int_{t_0}^{t_f} \mathcal{L}(\textbf{x}(t), \textbf{u}(t), t)\, dt

subject to the first-order dynamic constraints.

=============================================================================
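To make the quoted Bolza-form functional concrete, here is a small numerical sketch (my own toy illustration, not from the article): it evaluates J = Phi(x(t0), t0, x(tf), tf) + integral of L dt over sampled trajectories with trapezoidal quadrature, using the car example's "minimize total traveling time" cost, which is just L = 1.

```python
import numpy as np

def bolza_cost(t, x, u, terminal_cost, running_cost):
    """Approximate the Bolza functional for sampled t, x, u arrays."""
    L = np.array([running_cost(xi, ui, ti) for xi, ui, ti in zip(x, u, t)])
    # trapezoidal quadrature of the running cost
    integral = float(np.sum(0.5 * (L[1:] + L[:-1]) * np.diff(t)))
    return terminal_cost(x[0], t[0], x[-1], t[-1]) + integral

# Car example with L = 1: the accumulated "cost" is the elapsed time.
t = np.linspace(0.0, 10.0, 101)   # a 10-second trip
x = np.zeros_like(t)              # state samples (irrelevant when L = 1)
u = np.zeros_like(t)              # control samples (irrelevant when L = 1)
J = bolza_cost(t, x, u,
               terminal_cost=lambda x0, t0, xf, tf: 0.0,
               running_cost=lambda xi, ui, ti: 1.0)
print(J)  # -> 10.0, the total traveling time
```

Of course the optimal-control problem is to choose u so as to make such a J as small as possible; this only shows what is being scored.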

Notice right away what the objective is: it is to minimize a cost
function, given existing constraints, and not to minimize error in some
arbitrary controlled variable. Minimizing cost is a very much more
complex proposition than simply using the available facilities to make
the error as small as possible.

In fact, minimizing cost is minimizing error if the reference condition
to be achieved is zero cost. And cost might be taken to mean the result
of any operation by one control system that increases the error in
another one. Monetary cost is just one variable that might be minimized,
if the system has a limited budget and has to restrict expenditures. And
we could even generalize from there, because the auxiliary variable to be
minimized could actually be the error in another control system – say,
the difference between actual profit and desired profit. In that case,
the cost minimization might actually involve bringing profit to some
specific desired – nonzero – value.

The above excerpt shows how an analyst might derive a design for a
control system, but of course very few organisms know how to do that sort
of mathematics, or any sort, so this does not bring us closer to a model
of an organism even if this approach would work. The writers of the wiki
recognize a similar difficulty, saying

============================================================================
The disadvantage of indirect methods is that the boundary-value problem
is often extremely difficult to solve (particularly for problems that
span large time intervals or problems with interior point constraints).
A well-known software program that implements indirect methods is
BNDSCO. [4]
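To see what that boundary-value problem looks like, here is a sketch on a toy problem of my own choosing (not from the article): minimize the integral of u^2 for a double integrator x'' = u moving from rest at x = 0 to rest at x = 1 in unit time. Pontryagin's conditions give u = -lam2/2 plus costate equations lam1' = 0, lam2' = -lam1, so the optimal control is found by solving a two-point BVP.

```python
import numpy as np
from scipy.integrate import solve_bvp

def odes(t, y):
    # state (x, v) and costates (lam1, lam2); optimal u = -lam2/2
    x, v, lam1, lam2 = y
    u = -lam2 / 2.0
    return np.vstack([v, u, np.zeros_like(t), -lam1])

def bc(ya, yb):
    # boundary conditions: x(0)=0, v(0)=0, x(1)=1, v(1)=0
    return np.array([ya[0], ya[1], yb[0] - 1.0, yb[1]])

t0 = np.linspace(0.0, 1.0, 11)
sol = solve_bvp(odes, bc, t0, np.zeros((4, t0.size)))
u_start = -sol.sol(0.0)[3] / 2.0
print(round(u_start, 2))  # analytic optimum is u(t) = 6 - 12t, so 6.0
```

This toy problem is linear, so the solver converges easily; the "extremely difficult" cases the excerpt mentions arise when the dynamics are nonlinear or the horizon is long.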

They go on to describe a more practical
approach:

===========================================================================

The approach that has risen to prominence in numerical
optimal control over the past two decades (i.e., from the 1980s to the
present) is that of so called direct methods. In a direct method,
the state and/or control are approximated using an appropriate function
approximation (e.g., polynomial approximation or piecewise constant
parameterization). Simultaneously, the cost functional is approximated as
a cost function. Then, the coefficients of the function
approximations are treated as optimization variables and the problem is
“transcribed” to a nonlinear optimization problem of the
form:
Minimize

F(\textbf{z})\,

subject to the algebraic constraints

\begin{array}{lcl} \textbf{g}(\textbf{z}) & = & \textbf{0} \\ \textbf{h}(\textbf{z}) & \leq & \textbf{0} \end{array}

Depending upon the type of direct method employed, the size of the
nonlinear optimization problem can be quite small (e.g., as in a direct
shooting or quasilinearization method) or may be quite large (e.g., a
direct collocation method [5]). In the latter case (i.e., a collocation
method), the nonlinear optimization problem may be literally thousands
to tens of thousands of variables and constraints. Given the size of
many NLPs arising from a direct method, it may appear somewhat
counter-intuitive that solving the nonlinear optimization problem is
easier than solving the boundary-value problem. It is, however, the
fact that the NLP is easier to solve than the boundary-value problem.

=======================================================================================
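The transcription just quoted can be tried end-to-end on a small assumed example of mine (the same rest-to-rest double integrator as before): the control is parameterized as piecewise-constant values u[k] over N intervals, the state is propagated by Euler integration, and the result is an NLP "Minimize F(z) subject to g(z) = 0" handed to a generic optimizer. This is the direct-shooting variant, where the only optimization variables are the N control values.

```python
import numpy as np
from scipy.optimize import minimize

N, T = 20, 1.0
dt = T / N

def simulate(u):
    """Double-integrator plant x'' = u; returns final (position, velocity)."""
    pos, vel = 0.0, 0.0
    for uk in u:
        pos += vel * dt
        vel += uk * dt
    return pos, vel

def F(u):
    # cost function: discretized control effort, sum of u^2 * dt
    return float(np.sum(np.asarray(u) ** 2) * dt)

def g(u):
    # algebraic constraints g(z) = 0: end at position 1 with zero velocity
    pos, vel = simulate(u)
    return [pos - 1.0, vel]

res = minimize(F, np.zeros(N), method="SLSQP",
               constraints={"type": "eq", "fun": g})
pos, vel = simulate(res.x)
print(pos, vel)  # both should land near the targets 1 and 0
```

With N = 20 this NLP is tiny; the "thousands of variables" arise when the states at every grid point are also made decision variables, as in collocation.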

Clearly (to a mathematician, I mean), this ponderous approach will
lead to some sort of design of a control system – or to be more exact,
of a “control law” that will lead to achievement of minimum
cost, however that is defined. But once you set foot on this path there
is no leaving it, because one complexity leads to the need to deal with
another, and complex control processes remain extremely difficult to
handle.

However, the Achilles heel of this approach is to be found, I think, in
the idea of a “control law.” As I understand it, the
“control” of an optimal control system is an output which is so
shaped that when applied to a “plant” or system to be
controlled, the result will be the result that is wanted: what in PCT we
call a controlled variable is brought to a specified reference condition.

Here is a reference to the meaning of “control law:”


http://zone.ni.com/devzone/cda/tut/p/id/8156

========================================================================
A control law is a set of rules that are used to
determine the commands to be sent to a system based on the desired state
of the system. Control laws are used to dictate how a robot moves within
its environment, by sending commands to an actuator(s). The goal is
usually to follow a pre-defined trajectory which is given as the robot’s
position or velocity profile as a function of time. The control law can
be described as either open-loop control or closed-loop (feedback)
control.

=============================================================================

The way this relates to closed-loop control is described this
way:

=============================================================================

A closed-loop (feedback) controller uses the information
gathered from the robot’s sensors to determine the commands to send to
the actuator(s). It compares the actual state of the robot with the
desired state and adjusts the control commands accordingly, which is
illustrated by the control block diagram below. This is a more robust
method of control for mobile robots since it allows the robot to adapt to
any changes in its environment.



==============================================================================
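The contrast between the two kinds of control law in the quote can be shown in a few lines. This is my own sketch with an assumed first-order plant x' = u - x + d, where d is a constant disturbance the designer did not anticipate: the open-loop law sends a fixed command, the closed-loop law computes its command from the error between desired and sensed state.

```python
def run(closed_loop, d=0.5, steps=2000, dt=0.01):
    ref, x, k = 1.0, 0.0, 20.0          # setpoint, state, feedback gain
    for _ in range(steps):
        # control law: feedback on the error, or a fixed open-loop command
        u = k * (ref - x) if closed_loop else ref
        x += (u - x + d) * dt           # Euler step of the plant
    return x

print(round(run(False), 2))  # open loop settles at ref + d = 1.5
print(round(run(True), 2))   # closed loop stays near ref; error ~ d/(k+1)
```

The open-loop law is simply pushed off target by the disturbance; the closed-loop law opposes it without ever representing it, which is the point Bill develops below.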

You can see that they are getting closer, but this is only an illusion.
As shown, this system can’t “adapt to changes in its
environment,” but if we now think about reorganization, or
“adaptive control”, we find this:

=============================================================================


Figure 6. An adaptive control system implemented in LabVIEW

===========================================================================

Now we get to the nitty-gritty. Fig. 6 shows what has to go into this
control system model. Note the plant simulation in the middle of it. Note
the “adaptive algorithm.” Note the lack of any inputs to
the plant from unpredicted or unpredictable disturbances. And note the
lack of any indication of where reference signals come from. An engineer
building a device in a laboratory doesn’t have to be concerned about such
things, but an organism does. Clearly, for this diagram to represent a
living control system, it will need a lot of help from a protective
laboratory full of helpful equipment, and a library full of data about
physics, chemistry, and laws of nature – the same things the engineer
will use in bringing the control system to life. The engineer is going to
have to do the “system identification” first, which is where
the internal model of the plant comes from – note that the process by
which that model is initially created is not shown.

I’m not saying that this approach won’t work. With a lot of help, it
will, because engineers can solve problems and they won’t quit until they
succeed.

But organisms in general have no library of data or information about
natural laws or protection against disturbances or helpful engineers
standing by or – in most cases – any understanding of how the world
works or any ability to carry out mathematical analyses. This simply
can’t be a diagram of how living control systems work.

The PCT model is specifically about how organisms work. It actually
accomplishes the same ends that the above approach accomplishes, but it
does so in more direct and far simpler ways commensurate with the
capabilities of even an amoeba, and it does not require the control
system to do any elaborate mathematical analysis. It doesn’t have to make
any predictions (except the metaphorical kind of predictions generally
preceded by “You could say that …”). The PCT model constructs
no model of the external world or itself. It does not have to know why
controlled variables occasionally start to deviate from their reference
conditions. It does not need to know how its actions affect the outside
world. When it adapts, it does not do so by figuring out what needs to be
changed and then changing it.
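That model-free character can be put in code. Here is a minimal PCT-style loop (a sketch with assumed parameters, not a finished model): the system senses a perception p, compares it with a reference r, and integrates the error to produce output. It carries no model of the plant, no model of the disturbance, and makes no predictions, yet the disturbance is resisted.

```python
import math

def pct_run(steps=5000, dt=0.001, gain=200.0):
    r, o = 1.0, 0.0                      # reference signal and output quantity
    errors = []
    for i in range(steps):
        d = math.sin(math.pi * i * dt)   # slow disturbance, unknown to the loop
        p = o + d                        # perception = output + disturbance
        e = r - p                        # error signal
        o += gain * e * dt               # integrating output function
        errors.append(abs(e))
    return max(errors[len(errors) // 2:])  # worst error after settling

print(pct_run() < 0.05)  # -> True: the error stays small throughout
```

Nothing in the loop knows why p deviates from r or how o affects the world beyond the sign of the effect; high loop gain does all the work.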

This is not to say that the PCT model has reached perfection or handles
every detail correctly. Nor is it to say that there is nothing in optimal
control theory of any use to a theoretician trying to explain the
behavior of organisms. What I am saying is that PCT provides a far simpler
way of accounting for behavior than the current forms of optimal control
theory seem to provide, and as far as I know can predict behavior at
least as well if not better.

Optimal control theory seems to be a description of how an engineer armed
with certain mathematical tools might go about designing a control system
given the required resources such as fast computers, accurate sensors,
and well-calibrated actuators, in a world where no large unexpected
disturbances occur, or where help is available if they do.

I always think of the robot called Dante, which was sent down into
a dormant volcano with a remote mainframe analyzing its video pictures
for obstacles and calculating where to place (with excruciating slowness)
each of its hexapodal feet, and ended up on its back being hoisted out by
a helicopter.

http://www.newscientist.com/article/mg14319390.400-dante-rescued-from-crater-hell-.html

It stepped on a rock which rolled instead of holding firm. That
kind of robotic design is simply not suited for the real world. As Dante
showed, it is no way to minimize costs. Oh, what we could do with the
money they spent on that demonstration!

Best,

Bill P.

[From Richard Kennaway (2011.10.31.1448 GMT)]

As not everyone may have subscriptions to all the journals involved, here are URLs to freely available copies of the papers that Stephen Scott mentioned:

Todorov and Jordan (2002) Nature Neuroscience 5:1226-1235.

"Optimal feedback control as a theory of motor coordination"
http://www.cs.washington.edu/homes/todorov/papers/coordination.pdf
Supplemental info at
http://www.nature.com/neuro/journal/v5/n11/extref/nn963-S1.pdf

Todorov (2004) Nature Neuroscience 7:907-915.

"Optimality principles in sensorimotor control"
http://www.cs.washington.edu/homes/todorov/papers/optimality_review.pdf

Scott (2004) Nature Reviews Neuroscience 5:532-546.

"Optimal feedback control and the neural basis of volitional motor control"
http://limb.biomed.queensu.ca/publications/optimal_feedback_control_and_the_neural_basis.pdf

Scott (2008) J. Physiol. 586.5:1217-1224.

"Inconvenient Truths about neural processing in primary motor cortex"
http://www.ece.cmu.edu/~mgolub/nmc/ScottJPhysiol2007.pdf


--
Richard Kennaway, jrk@cmp.uea.ac.uk
School of Computing Sciences,
University of East Anglia, Norwich NR4 7TJ, U.K.

Hi, Richard --

You are so great, Richard. Thanks for coming up with those links just when I was wondering how on earth I was going to get hold of those publications. In my little double-wide trailer in the trailer park, it's a long way to an academic library. I guess I could enlist the aid of the guys at CU 15 miles or so from here, but you have given me a nice short-cut that I can use right away.

While I have you on the line, I have another enormous favor to ask of you. Somehow I got Delphi XE loaded with the GLScene components, and have even compiled and run the RagDoll simulation example with full physical dynamics, but I'm so slow at this stuff that I haven't got anywhere further with it. As you know, for a long time I've wanted a basic physical model of the human body, with masses, moments of inertia, and detection of effects of external forces (and I guess collisions), so I could start trying to develop control systems to run the thing. Make it sit up, roll over, crawl, grab things to pull it up on its knees, then its feet. Make it walk. Then run.
Tell it not to eat that apple. I could at least get started on that project, break the ice, so others could carry on with it.

I know you want to do this with a real robot, but I assume you also agree that we'll have to start with simulations in any event. What I need is a simulation package that will receive signals specifying muscle tensions (or torques) affecting each degree of freedom down to the fingers (like you did for that ASL simulator), returning outputs which are the behavior of all the joint angles and movable, twistable, or bendable body segments.

I realize, of course, that you have a life and are not there to provide me with goodies, but if, over the next year or so, you could at least look into this and see how feasible it is, or even just give me some tips on how to develop this model myself, I would be grateful for some commensurate length of time. And I think you as well as others could use the product, too.

Oh, and I assume you take it for granted that I will yell for help if the math in those publications gets beyond me, which is likely.

Best,

Bill

And I'm in a fifth wheel in an RV resort in Tucson so I appreciate them too.

Regards,

Fred Nickols
Managing Partner
Distance Consulting LLC
Home to "Solution Engineering"
1558 Coshocton Ave - Suite 303
Mount Vernon, OH 43050
www.nickols.us | fred@nickols.us

"We Engineer Solutions to Performance Problems"

From: Control Systems Group Network (CSGnet)
[mailto:CSGNET@LISTSERV.ILLINOIS.EDU] On Behalf Of Bill Powers
Sent: Monday, October 31, 2011 8:35 AM
To: CSGNET@LISTSERV.ILLINOIS.EDU
Subject: Re: Optimal control theory


Hi, Stephen –

OK, I have all those papers stashed away on my computer so we’re in
business.

Just a brief glance confirms my initial impressions, but I shall try to
be extremely tactful and humble about this, even though I am going to
tell you that we solved all those problems long ago, starting in 1960.
What we did was to come up with a completely different architecture that
greatly simplifies the tasks, as well as shifting the emphasis from
generating output to controlling input. You’ll see. I know this is
going to be conceptually difficult, but if you’re willing to consider a
next-generation model of motor control, I don’t think you will be sorry
you did.

I have to get ready now to go to CU (University of Colorado, Boulder),
where with Lew Harvey (chair of dept. of psychology and neuroscience) and
some other people we will start getting next year’s meeting of the
control systems group organized. I’m hoping that the University will help
organize it and make it a fairly major event – according to Lew Harvey,
CU might even consider sponsoring the attendance of some of the people
from China who have joined us in spirit if not in body. Oops, I forgot
that today is Halloween. Don’t take me literally.

I hope that you and some of your colleagues will consider attending, even
if you don’t adopt PCT as your new testament. The meeting will be in
July, probably from the 25th through the 28th with the 29th as departure
day. I am going to suggest a workshop preceding, on the 24th or perhaps
for two days, discussing PCT and psychotherapy, which ought to drag some
people from Manchester (UK) and Australia over here to help run it, as
well as bringing in clinical psychologists from CU. You would probably
get moral support from the CU neuroscientists, who are still pretty
skeptical but willing to listen. But by then I’m sure everyone will be on
the same page.

Mark your calendars and start looking for money, everyone.

Best,

Bill P.


At 10:34 AM 10/31/2011 -0400, Stephen Scott wrote:

While the general principles
outlined in Bill’s e-mail cover the topic of optimal control, I would
suggest that if people are really interested to understand how these
ideas relate to the voluntary motor system they will need to read some
papers on this topic. First and foremost you must read the original
articles of Emo Todorov.

Todorov and Jordan (2002) Nature Neuroscience 5:1226-1235.

Todorov (2004) Nature Neuroscience 7:907-915.

I’ve written an article to think about neural implementation of the ideas
in:

Scott (2004) Nature Reviews Neuroscience 5:532-546.

A relatively simple history of primary motor cortex, and how different
theories have shaped the experiments performed over the years:

Scott (2008) J. Physiol. 586.5:1217-1224.

A few key points

OFC is a normative model that describes the best possible solution given
the behavioural goal, noise and properties of the system. The
mathematics of stochastic optimal feedback control (perhaps the best
title for the version used for the motor system) are certainly not
implemented by the brain. Motor learning becomes an interesting issue
that is being considered by a few labs. Some deviations between observed
behaviour and the nominal predictions of OFC are starting to be observed and provide
useful insight on neural implementation. I actually like the separation
of broad theory on motor control from neural implementation. Too often
people get lost in the details and forget what the system was supposed to
do in the first place.

One of my pet peeves has been the continued search for the variables
coded in the activity in primary motor cortex. After 45 years (Evarts
started this quest in mid-1960s) all we’ve learned is that you can find
any and every variable. Unfortunately, there is no holy grail.
Optimal feedback control highlighted to me the need to focus on the
behavioural objective and how that influences control (and neural
processing).

Cheers,

Steve

On 30-Oct-11, at 2:34 PM, Bill Powers wrote:

Hello, all –

Apparently, the theory du jour in neuroscience is optimal control theory.
At least that is what Steve Scott uses, and what I see frequently in the
few other discussions I have looked at.

Here is part of a Wikipedia article on the subject. See

http://en.wikipedia.org/wiki/Optimal_control

Don’t try to follow the links – I don’t think they will work since I
just copied these segements from the article. But who knows?

==========================================================================

General method

Optimal control deals with the problem of finding a
control law for a given system such that a certain optimality criterion
is achieved. A control problem includes a
cost
functional
that is a function of state and control variables. An
optimal control is a set of differential equations describing the
paths of the control variables that minimize the cost functional. The
optimal control can be derived using

Pontryagin’s maximum principle
(a
necessary
condition
also known as Pontryagin’s minimum principle or simply
Pontryagin’s
Principle
[2]),
or by solving the

Hamilton-Jacobi-Bellman equation
(a
sufficient
condition
).We begin with a simple example. Consider a car traveling on a
straight line through a hilly road. The question is, how should the
driver press the accelerator pedal in order to minimize the total
traveling time? Clearly in this example, the term control law refers
specifically to the way in which the driver presses the accelerator and
shifts the gears. The “system” consists of both the car and the
road, and the optimality criterion is the minimization of the total
traveling time. Control problems usually include ancillary

constraints
. For example the amount of available fuel might be
limited, the accelerator pedal cannot be pushed through the floor of the
car, speed limits, etc.
A proper cost functional is a mathematical expression giving the
traveling time as a function of the speed, geometrical considerations,
and initial conditions of the system. It is often the case that the
constraints are interchangeable with the cost functional.
Another optimal control problem is to find the way to drive the car
so as to minimize its fuel consumption, given that it must complete a
given course in a time not exceeding some amount. Yet another control
problem is to minimize the total monetary cost of completing the trip,
given assumed monetary prices for time and fuel.
A more abstract framework goes as follows. Minimize the
continuous-time cost functional
J=\Phi(\textbf{x}(t_0),t_0,\textbf{x}(t_f),t_f) + \int_{t_0}^{t

subject to the first-order dynamic
constraints.

=============================================================================

Notice right away what the objective is: it is to minimize a cost
function, given existing constraints, and not to minimize error in some
arbitrary controlled variable. Minimizing cost is a very much more
complex proposition than simply using the available facilities to make
the error as small as possible.

In fact, minimizing cost is minimizing error if the reference condition
to be achieved is zero cost. And cost might be taken to mean the result
of any operation by one control system that increases the error in
another one. Monetary cost is just one variable that might be minimized,
if the system has a limited budget and has to restrict expenditures. And
we could even generalize from there, because the auxiliary variable to be
minimized could actually be the error in another control system – say,
the difference between actual profit and desired profit. In that case,
the cost minimization might actually involve bringing profit to some
specific desired – nonzero – value.

The above excerpt shows how an analyst might derive a design for a
control system, but of course very few organisms know how to do that sort
of mathematics, or any sort, so this does not bring us closer to a model
of an organism even if this approach would work. The writers of the wiki
recognize a similar difficulty, saying

============================================================================
The disadvantage of indirect methods is that the
boundary-value problem is often extremely difficult to solve
(particularly for problems that span large time intervals or problems
with interior point constraints). A well-known software program that
implements indirect methods is
BNDSCO.
[4]

They go on to describe a more practical
approach:

===========================================================================

The approach that has risen to prominence in numerical
optimal control over the past two decades (i.e., from the 1980s to the
present) is that of so called direct methods. In a direct method,
the state and/or control are approximated using an appropriate function
approximation (e.g., polynomial approximation or piecewise constant
parameterization). Simultaneously, the cost functional is approximated as
a cost function. Then, the coefficients of the function
approximations are treated as optimization variables and the problem is
“transcribed” to a nonlinear optimization problem of the form:Minimize
F(\textbf{z})\,

subject to the algebraic constraints
\begin{array}{lcl} \textbf{g}(\textbf{z}) & = & \textb

Depending upon the type of direct method employed, the size of the
nonlinear optimization problem can be quite small (e.g., as in a direct
shooting or quasilinearization method) or may be quite large (e.g., a
direct collocation
method

[5]
). In the latter case (i.e., a collocation method), the
nonlinear optimization problem may be literally thousands to tens of
thousands of variables and constraints. Given the size of many NLPs
arising from a direct method, it may appear somewhat counter-intuitive
that solving the nonlinear optimization problem is easier than solving
the boundary-value problem. It is, however, the fact that the NLP is
easier to solve than the boundary-value problem.=======================================================================================

Clearly (to a mathematician, I mean), this ponderous approach will
lead to some sort of design of a control system – or to be more exact,
of a “control law” that will lead to achievement of minimum
cost, however that is defined. But once you set foot on this path there
is no leaving it, because one complexity leads to the need to deal with
another, and complex control processes remain extremely difficult to
handle.

However, the Achilles heel of this approach is to be found, I think, in
the idea of a “control law.” As I understand it, the
“control” of an optimal control system is an output which is so
shaped that when applied to a “plant” or system to be
controlled, the result will be the result that is wanted: what in PCT we
call a controlled variable is brought to a specified reference condition.

Here is a reference to the meaning of “control law:”


http://zone.ni.com/devzone/cda/tut/p/id/8156

========================================================================
A control law is a set of rules that are used to
determine the commands to be sent to a system based on the desired state
of the system. Control laws are used to dictate how a robot moves within
its environment, by sending commands to an actuator(s). The goal is
usually to follow a pre-defined trajectory which is given as the robot’s
position or velocity profile as a function of time. The control law can
be described as either open-loop control or closed-loop (feedback)
control. =============================================================================

The way this relates to closed-loop control is described this
way:

=============================================================================

A closed-loop (feedback) controller uses the information
gathered from the robot’s sensors to determine the commands to send to
the actuator(s). It compares the actual state of the robot with the
desired state and adjusts the control commands accordingly, which is
illustrated by the control block diagram below. This is a more robust
method of control for mobile robots since it allows the robot to adapt to
any changes in its environment.


[]

==============================================================================

You can see that they are getting closer, but this is only an illusion.
As shown, this system can’t “adapt to changes in its
environment,” but ff we now think about reorganization, or
“adaptive control”, we find this:

=============================================================================

<b144dc.jpg>


[]

Figure 6. An adaptive control system implemented in LabVIEW

===========================================================================

Now we get to the nitty-gritty. Fig. 6 shows what has to go into this
control system model. Note the plant simulation in the middle of it. Note
the “adaptive algorithm” Note the lack of any inputs to
the plant from unpredicted or unpredictable disturbances. And note the
lack of any indication of where reference signals come from. An engineer
building a device in a laboratory doesn’t have to be concerned about such
things, but an organism does. Clearly, for this diagram to represent a
living control system, it will need a lot of help from a protective
laboratory full of helpful equipment, and a library full of data about
physics, chemistry, and laws of nature – the same things the engineer
will use in bringing the control system to life. The engineer is going to
have to do the “system identification” first, which is where
the internal model of the plant comes from – note that the process by
which that model is initially created is not shown.

I’m not saying that this approach won’t work. With a lot of help, it
will, because engineers can solve problems and they won’t quit until they
succeed.

But organisms in general have no library of data or information about
natural laws or protection against disturbances or helpful engineers
standing by or – in most cases – any understanding of how the world
works or any ability to carry out mathematical analyses. This simply
can’t be a diagram of how living control systems work.

The PCT model is specifically about how organisms work. It actually
accomplishes the same ends that the above approach accomplishes, but it
does so in more direct and far simpler ways commensurate with the
capabilities of even an amoeba, and it does not require the control
system to do any elaborate mathematical analysis. It doesn’t have to make
any predictions (except the metaphorical kind of predictions generally
preceded by “You could say that …”). The PCT model constructs
no model of the external world or itself. It does not have to know why
controlled variables occasionally start to deviate from their reference
conditions. It does not need to know how its actions affect the outside
world. When it adapts, it does not do so by figuring out what needs to be
changed and then changing it.

This is not to say that the PCT model has reached perfection or handles
every detail correctly. Nor is it to say that there is nothing in optimal
control theory of any use to a theoretician trying to explain the
behavior of organisms. What I am saying is that PCT provides a far simpler
way of accounting for behavior than the current forms of optimal control
theory seem to provide and, as far as I know, predicts behavior at
least as well if not better.

Optimal control theory seems to be a description of how an engineer armed
with certain mathematical tools might go about designing a control system
given the required resources such as fast computers, accurate sensors,
and well-calibrated actuators, in a world where no large unexpected
disturbances occur, or where help is available if they do.

I always think of the robot called Dante, which was sent down into
a dormant volcano with a remote mainframe analyzing its video pictures
for obstacles and calculating where to place (with excruciating slowness)
each of its hexapodal feet, and ended up on its back being hoisted out by
a helicopter.


http://www.newscientist.com/article/mg14319390.400-dante-rescued-from-crater-hell-.html

It stepped on a rock which rolled instead of holding firm. That
kind of robotic design is simply not suited for the real world. As Dante
showed, it is no way to minimize costs. Oh, what we could do with the
money they spent on that demonstration!

Best,

Bill P.

Dr. Stephen Scott, Ph.D., Professor

Centre for Neuroscience Studies

Dept. of Biomedical and Molecular Sciences

Dept. of Medicine

Queen’s University

Kingston, ON

K7L 3N6


Hi Bill,
Todorov can be forgiven for not getting things right. Believe it or not, the modern motor control stuff is an improvement over the traditional motor control work.

Yes, I can see that now. Thanks for this big-picture post, just what we need to be thinking about. You're very good at that. It's good to find a place in the flow of science, which lessens the pain a bit on both sides of arguments.

Best,

Bill P.

P.S. note expanded CC list.

···

At 01:56 PM 10/31/2011 -0400, Henry Yin wrote:

Hi, Steve --

SS: Perhaps I don't understand what you mean at the end of your last paragraph regarding the inverse kinematics problem. This issue has been recognized for a very long time or I don't understand what you mean?

BP: I'm probably the one to answer this question, though Henry will have useful things to say.

In order to construct a "control" in OCT terms, it's necessary for the control system to find the inverse of the kinematic relationships relating joint angles to limb configurations such as hand position in space. That is unnecessary in a negative feedback control model like PCT. Similarly, in the OCT approach it's necessary for the control system to compute the inverse dynamics so the right temporal patterns of commands for producing a desired final effect can be issued to the muscles. Again, this is unnecessary in the negative feedback control system model we use in PCT. And in OCT models, a lot of "prediction" seems necessary, though it's not used in a PCT model.

Is that what you mean when you say that this issue has been recognized for a very long time? My impression so far, possibly due to sketchy reading, is that this computational problem has been recognized, but its irrelevance to control has not been recognized. When we refer to these "issues" in PCT, we mean the issue that these inverse calculations are still part of a model of behavior when they are really a problem, not a solution. A system that works by computing inverse kinematics and/or dynamics can't resist unexpected disturbances or compensate for unsensed changes in its own output systems. Negative feedback control systems (and organisms) can. And negative feedback control is orders of magnitude simpler than the baroque schemes I'm seeing in the current batch of papers.
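To make the contrast concrete, here is a toy sketch (entirely my own construction, with invented numbers, not taken from any OCT paper): an "inverse model" controller computes one command from an assumed plant gain, while a negative feedback controller simply integrates its error with no knowledge of the plant at all. When the real gain drifts away from the assumed one, only the feedback controller still reaches the goal.

```python
# Toy contrast, not from any OCT paper: an "inverse model" controller
# computes one command from an assumed plant gain k0; a negative feedback
# controller integrates its error without knowing the plant gain.
k0, k_real = 2.0, 2.6      # assumed vs. actual plant gain (invented numbers)
target = 10.0

# Open-loop inverse: command = inverse of the assumed plant applied to the goal.
u_inv = target / k0
p_inv = k_real * u_inv     # misses by 30% because the gain has drifted

# Closed-loop: keep integrating the error until the perception matches.
dt, G, u = 0.01, 5.0, 0.0
for _ in range(1000):
    p_fb = k_real * u
    u += G * (target - p_fb) * dt
p_fb = k_real * u

print(p_inv, p_fb)         # 13.0 vs. ~10.0
```

The feedback loop compensates for the unsensed change in its own output connection as a side effect of closing the loop, which is the point being made above.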

Some of our simulations show control systems operating several joints to produce smooth movements of a hand that tracks an arbitrarily moving target in three dimensions using simulated visual feedback. These models perform no inverse calculations and do no predicting -- not even the model of an arm and hand moving in three dimensions while 14 degrees of freedom are brought under control by "reorganization," the adaptation algorithm of PCT.

I could be quite mistaken, but the impression I'm getting from the things I'm reading now is that workers in OCT are dismissing classical negative feedback control models only because they are unfamiliar with the properties of such systems. Rather than explaining why they do so, they cite other people instead of offering their own analysis, as if that issue was settled long ago by experts and doesn't need reopening for discussion.

One result of relying on unverified opinions has been the myth that transport lags in a negative feedback loop necessarily create instability that makes negative feedback inappropriate for a motor control model. We have been incorporating transport lags into models of tracking behavior for many years, and when one understands the simple methods for preventing instability there are no problems at all, even with very high loop gains that allow very accurate control. In the Live Block Diagram demo to which I referred you some time ago, there is an adjustable transport lag with a default value of 133 milliseconds, the lag actually measured in a tracking experiment in another demo, and the loop gain is 100, meaning that 99% of the effect of a steady disturbance is counteracted. A step disturbance is corrected in one smooth motion with that precision after a delay of two to three times the transport lag. And when this same model is used to simulate the human performance in the tracking experiment, 96% or more of the variance of mouse position is accounted for.

So it's very clear that transport lags are not a problem in models of real motor control. Only reliance on mistaken opinions from long ago could explain why anyone still thinks they are a problem.
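As a minimal illustration of the point about transport lags (my own parameter choices, not the actual Live Block Diagram code; the slowing time constant is an assumed value), here is a loop with a 133 ms lag and a loop gain of 100 that remains stable and counteracts about 99% of a steady disturbance:

```python
# A negative feedback loop with a 133 ms transport lag and loop gain 100.
# The "simple method for preventing instability" used here is a slowed
# (leaky-integrator) output function; tau is an assumed value.
dt = 0.001                 # 1 ms simulation step
delay = int(0.133 / dt)    # transport lag of 133 ms, in steps
G, tau = 100.0, 20.0       # loop gain; output slowing time constant (assumed)
r, d = 0.0, 1.0            # reference signal; steady step disturbance

o = 0.0                    # output quantity
e_hist = [0.0] * delay     # error signals still in transit through the lag
p_trace = []
for _ in range(int(5.0 / dt)):             # simulate 5 seconds
    p = o + d                              # perception = output + disturbance
    e = r - p                              # error
    e_hist.append(e)
    e_delayed = e_hist.pop(0)              # error as seen after the lag
    o += (dt / tau) * (G * e_delayed - o)  # leaky-integrator output function
    p_trace.append(p)

residual = abs(p_trace[-1] - r)
print(f"residual error: {residual:.4f}")                  # ~ d/(1+G) = 0.0099
print(f"disturbance counteracted: {1 - residual/d:.1%}")  # ~ 99%
```

With the delay set to zero the steady-state result is the same, which is the point: the lag costs settling time (a few multiples of 133 ms), not stability or precision.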

I also have some evidence that the lags are really integral lags, not transport lags.

The idea of negative feedback control met with opposition almost the instant it was invented, even before Wiener showed up with cybernetics. I could understand that at first, because scientists have to be skeptical of an idea that threatens radical changes in our understanding of something. I was 26 years old when I heard about cybernetics and learned about the connection between servomechanisms and human behavior, and was all excited and looked forward to big changes in the life sciences when others saw how this new principle works. I'm still looking forward to it, because it sure hasn't happened yet. Though maybe it has started to happen.

Steve, I think there is going to be a battle between OCT and PCT. And I think the best way to manage it is to go back to the basic principles of science and do real experiments to test models. This battle won't happen between people but inside them; it's an internal battle that just about everyone in PCT has had to go through, just because PCT is such a radical departure from tradition.

Mark Twain or Will Rogers probably said it: "It's not what we don't know that's the problem, it's what we know that ain't so." There seem to be a lot of things people know about negative feedback control that ain't so. This means that when we say something about it pro or con, or about any other theory, we can't just assume that what we say is right. We have to try to demonstrate that it's right.

So, what experiments/demonstrations would you recommend? I can ask you how OCT would be used to explain the phenomena in my demos, and I could ask you the same thing about yours. This will get us somewhere a lot faster than just arguing.

Best,

Bill

···

At 10:09 PM 11/2/2011 -0400, Stephen Scott wrote:

Hi, Tim –

TC: I can’t see how a battle could occur between OCT and PCT … one
is trying to model the neural organisation of behaviour and the other is
trying to do something else. I could see a battle ensuing between
different theories explaining the same phenomenon differently but that
doesn’t seem to be the case here.

BP: Both purport to be theories of how organisms control things. In
various areas they overlap. But they explain the phenomenon of control by
organisms in very different ways. Only one of those ways can be right, in
the respects where they differ. We ought to be able to find out which way
that is in each specific case. I’m willing to put PCT to the test. Some
people in this group who are knowledgeable about OCT may be willing to
put their theory to the test, too. There could be features of both
theories that we don’t know how to test, but the features we can test
might help in making the decision about which way to go with the
others.

It seems to me that this is the only way to choose between theories, the
only way that can bypass problems of previous commitments and beliefs,
arriving at a conclusion with a minimum of hidden agendas. At the highest
level we all want to understand behavior and experience correctly,
regardless of the ups and downs of getting to that understanding. We all
want to make the world a better place to live in for everyone; if there
is a hope of doing that, all other subgoals become secondary and
temporary.

I don’t care if PCT loses the battle with OCT (and many other similar
battles), as long as the outcome is that we all understand the human
condition, with confidence, in significantly better ways. That would make
up for any disappointments that come out of this effort.

So what is the highest goal here? Is it to be right, or to know the
truth?

Best,

Bill

[From Rick Marken (2011.11.03.1200)]


It seems to me that this is the only way to choose between theories

I agree. It seems like there are several quick little studies that
could provide critical tests of PCT vs OCT. One would be a tracking
experiment where there is both a time-varying, low-pass-filtered
disturbance to the controlled variable and a different time-varying,
low-pass-filtered change in the feedback connection (the value of a
linear multiplier). It seems like OCT
would have a very rough time with that one whereas for PCT it's a
piece of cake.
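A rough sketch of that first experimental situation (hypothetical parameters, my own construction, not Rick's actual demo code): the controlled variable is disturbed while the feedback multiplier drifts, yet a plain integrating controller keeps the error small without any model of either change.

```python
# Hypothetical tracking situation: a slowly varying disturbance d(t) AND a
# slowly drifting feedback multiplier k(t). The controller just integrates
# its error; it has no model of k or d. All parameters are invented.
import math

dt, T = 0.01, 60.0
G = 20.0                  # output integration rate (assumed)
r = 0.0                   # reference for the controlled variable
o, errs = 0.0, []
for i in range(int(T / dt)):
    t = i * dt
    d = math.sin(2 * math.pi * 0.05 * t)              # low-frequency disturbance
    k = 1.5 + 0.5 * math.sin(2 * math.pi * 0.02 * t)  # feedback gain drifts 1.0..2.0
    p = k * o + d                 # environment: feedback function * output + disturbance
    e = r - p
    o += G * e * dt               # integrate the error; no inverse computed anywhere
    errs.append(e)

rms = math.sqrt(sum(x * x for x in errs) / len(errs))
print(f"RMS error: {rms:.4f}")    # small despite both kinds of change
```

The same structure with the "piece of cake" property Rick describes: nothing in the loop needs to be recomputed when the feedback connection changes.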

Also, it would be interesting to see if OCT could produce a model of
the manual control task developed by Mechsner (and reported in
_Nature_). It might be able to handle the task as is but I think it
would really have a tough time if slowly varying disturbances were
added to the handle turn ratios. Again, for PCT it would be a piece of
cake; for OCT, maybe not so much.

But I think the tracking task with varying disturbance and feedback
function might be the best place to start.

Best

Rick

···

On Thu, Nov 3, 2011 at 9:08 AM, Bill Powers <powers_w@frontier.net> wrote:

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Rick Marken (2011.11.03.1215)]

Rick Marken (2011.11.03.1200)--

Also, it would be interesting to see if OCT could produce a model of
the manual control task developed by Mechsner (and reported in
_Nature_).

By the way, that task (and the PCT model of it) is described here:
http://www.mindreadings.com/Coordination.html

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

Hi, Tim --

TC: Hmmm. If you knew 'the truth' wouldn't that also mean you were right?

BP: Not if you started out believing something that was disproven on the way to discovering the truth. My question was, paraphrased, are you willing to search for and accept the truth even if that means you may have to admit you were wrong?

TC: That aside, I'm surprised at your suggestion that both theories purport to be theories about how organisms control things. I get that both theories have something to do with control but given that OCT is accepted by the people who study it (at least one of those anyway but he's published in Nature so that's a pretty good 'one' to use as a benchmark) _not_ to be a description of how the brain works, how can it be a theory about the 'how' of control?

BP: There are many different ways to implement any given function in neurons. One way to represent a position in two-dimensional space is to have one neuron fire at a rate representing the X direction of measurement and a second neuron fire at a rate representing the Y direction. That's how the current PCT models would do it. Another way is to have a single neuron in a specific place in a two-dimensional map fire at some rate. Now the X-Y position is represented by a single neuron's location in the neural map. The latter is known to be the case in visual areas of the brain, and also "somatotopic" areas. So we know that the brain, in at least some areas, doesn't work the way the PCT model works, though both ways of representing position might be able to carry out the same function in a control system. The only problem is that I can't imagine how that would be done under the mapping scheme. Some day, of course, we will understand how but right now it's still a mystery to me. Martin Taylor thinks we can just ignore that problem, but I don't.

I think you may be making too much of Stephen's observation that the OCT model doesn't work the way the brain works. I think he means only that while certain functions are proposed to be carried out by the brain, they are probably not implemented in the same mathematical way an analyst would proceed.

TC: I don't know that this battle is going to get you any closer either to the truth or being right but it'll be great to see the experiments you come up with. More good stuff for PCT.

BP: Well, we shall see. It could be that PCT has some mistakes in it that OCT could correct. If we're not willing to accept that possibility, we might as well not do the testing, because that would say we don't want to know the truth if it's not what we already believe.

Best,

Bill

···

At 01:15 PM 11/3/2011 -0700, Tim Carey wrote:

Folks,

I am utterly fascinated by this entire dialog for many reasons, including some that have nothing to do with PCT or OCT but rather with the created human system/interface that is growing from this forum (actually, maybe that human system/interface development is part of OCT and/or PCT in one way or another). Not my point….

I have a question that on the surface may appear very elementary for many of you….

Behind this action called “Control,” aren’t some types of “variables” in play here? Of course. But has PCT and/or OCT actually defined what a “variable” actually is, and if so, is this coming from a theoretical definition or a scientific definition? I understand that some dictionaries define a “variable” as:

var·i·a·ble /ˈvɛəriəbəl/ [vair-ee-uh-buhl] adjective

1. apt or liable to vary or change; changeable: variable weather; variable moods.

2. capable of being varied or changed; alterable: a variable time limit for completion of a book.

3. inconstant; fickle: a variable lover.

4. having much variation or diversity.

5. Biology. deviating from the usual type, as a species or a specific character.

So many dictionaries claim a “variable” is one thing or another, but are there any academic papers written specifically about “variables” that would support this process we know as “controlling”? (Variable-centric and not necessarily control-centric types of papers.) … Papers that really dissect variables are what I am looking for here.

Lastly, I would love for all of you to really rip the below apart. Don’t worry, you won’t upset me or hurt my feelings. I wrote the below knowing it is flawed. This is why I am asking the above questions.

“Variables come in all forms. If something, anything, has the ability to change or vary, it is construed as a variable. Pretty much anything and everything can be construed as a variable, ranging from humans to terrain, water to stars, temperature to feelings, energy to matter, etc. The universe in which we live is filled with variables. They can easily be construed as the pieces which make us who we are as beings. Like it or not, variables can arguably be construed as the single universal entity which actually controls us as cognitively induced species. Arguably, variables induce perceptions and perception influences thought and our thoughts promote our actions.”

Kerry Patton

···

On Thu, Nov 3, 2011 at 9:01 PM, Bill Powers powers_w@frontier.net wrote:



Kerry Patton

Professor

Henley-Putnam University

(570)278-3618

[From Erling Jorgensen (2011.11.04 10:50 EDT)]

Kerry Patton (Fri, 4 Nov 2011 07:44:30 -0400)

Hello Kerry,

Let me respond to what you raise about "variables," just as you have
asked.

Variables come in all forms. If something, anything, has the ability to
change or vary, it is construed as a variable.

So, you are making changeability a basic property of the (perceived)
universe. It seems to me that the whole idea of different layers of
perception, which in HPCT are proposed to be linked hierarchically, is
that various _invariances_ are constructed with each new type of
perception. Yes, they then vary, but they are subsequently brought back
to stabilized values, by the operation of the organism. That is the
process we call control.

Like it or not, variables can arguably be construed as the single
universal entity which actually controls us as cognitively induced
species.

I start with a somewhat skeptical eye when it comes to universal dictums.
When we try to say everything in a condensed form, sometimes it turns
out that it doesn't really lead us forward. I cannot tell whether it
is profound or trivial, to say that everything changes. One way to test
a dictum is to ask, "How could it be otherwise?"

I would actually expand on your formulation, borrowing from Gregory
Bateson (who maybe borrowed it from somebody else). I would say that
what matters are "differences that make a difference." I think that gets
us closer to the actual import of changes for living control systems.

If we primarily focus on the 'change' aspect of variables, I think that
misses the profound insight raised when attention is drawn to what does
_not_ change when it should be changing. It is _stabilized_ variables
that are the real puzzle in the universe.

If you have been around the literature of PCT for a while, you might
recognize the criterion of 'lack of change when there should be change'
as lying behind the mathematical implementation of a Stability Factor (I
think that's what it's called), in simulations such as Rick Marken's
Mind Reading demo, which is really quite a persuasive demonstration of
principle.

Arguably, variables induce perceptions and perception influences thought
and our thoughts promote our actions.

For some reason, you stopped short. You cut the loop at what we in CSG
would say is a crucial omission. Our actions then induce our perceptions!
That is the pivotal insight behind seeing the effects of circular
causality implemented as negative feedback loops.

And when that last part of the loop is included, then our actions have
the opportunity for counteracting the effects of those initial (or
initiating) variables. We are no longer as subject to the whims or
vagaries of the external world. If some difference makes a difference to
us, then we can often stabilize it (as assessed by our perception of it)
at a value that works best for us.

I don't think non-living matter can do that. That to me is the fascination
of living organisms. They "control." (I guess I don't do away with all
universal dictums...)

All the best,
Erling

Erling,

Thank you so much for your educational feedback. Yes, I am completely new to the whole PCT, HPCT, etc world. So new that I admit CSG is an acronym I am unaware of.

But what really got my mind rolling is when you stated:

"For some reason, you stopped short. You cut the loop at what we in CSG
would say is a crucial omission. Our actions then induce our perceptions!
That is the pivotal insight behind seeing the effects of circular

causality implemented as negative feedback loops.

And when that last part of the loop is included, then our actions have
the opportunity for counteracting the effects of those initial (or
initiating) variables. We are no longer as subject to the whims or

vagaries of the external world. If some difference makes a difference to
us, then we can often stabilize it (as assessed by our perception of it)
at a value that works best for us."

In fact, I agree 100% with you; however, I never gave a thought to actions countering those initial variables! BRILLIANT! Thank you kindly!

···

On Fri, Nov 4, 2011 at 11:28 AM, Erling Jorgensen ejorgensen@riverbendcmhc.org wrote:



Kerry Patton

Professor

Henley-Putnam University

(570)278-3618

To have been right. Of course after you understand the system (if
you get that far) you can revise your memory of what you thought you knew
before and claim you knew it all the time but just didn’t express it
well. But the difficulties with new ideas always arise before you
understand them and while you still defend the ideas that are going to be
replaced. It’s giving up the old idea that hurts and that people don’t
like to do. It’s less painful if you can feel you were right and that the
new idea must, somehow, be wrong.
I’m not lecturing at you, Tim; I’m lecturing at myself. As I’ve been
writing these things over the last day or so, I’ve been recognizing the
same symptoms in myself. Who has a bigger investment in PCT than I have?
Who is in greater danger of rejecting good ideas just because I don’t see
where they fit with my ideas? I’ve been loosening up in the last few
days, and seeing some new things that I’ve rejected before.
Specifically: model-based control. I’ve been saying that the PCT model
doesn’t need the internal models that we see in some of the OCT writings.
But just this morning I saw that we not only need them, but we already
have models in PCT. We call them “perceptions.” Behavior is not
the control of the reality outside the organism, outside ourselves. It is
control of an inner representation in which there are variables which act
as more or less faithful surrogates for those external variables we can’t
experience directly. That’s a model.
The way this came about was that I was thinking about the way two-level
position control works: the higher level controls a perception of the
position of a mass, and it does so by varying the reference signal sent
to a lower-order system that senses and controls velocity by varying an
output force. This is a very simple, neat system and it even seems to
adapt to changing loads but doesn’t have to change its properties at all
(see Demo 5-1, LCS3).
I was wondering, not for the first time, how best to make the control
model sense the velocity of the mass because that’s the key to the
apparent adaptation to changing loads. What would be the simplest
possible way? I realized that if the force were sensed (which we know
happens in Golgi tendon organs and in skin-pressure receptors), all we
would have to do would be to integrate the force signal to get a velocity
signal. Of course that wouldn’t really be like sensing the velocity
directly because if the mass changed, we’d somehow have to change the
proportionality factor in the integrator to match the new mass, but when
we’re talking about moving a forearm around the elbow joint, how often
would the forearm mass change? In fact, the integration of force would
give us a pretty good perception of velocity because it is a direct –
oh-oh – model of the way Newtonian physics turns force (torque)
acting on an arm into (angular) velocity.
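The arithmetic of that last point can be sketched as follows (toy masses and a toy force profile of my own choosing): integrating sensed force reproduces the Newtonian velocity v = (1/m)∫F dt exactly while the assumed mass matches the real one, and drifts as soon as the load changes.

```python
# Integrating sensed force as a stand-in for sensing velocity.
# Masses and the force profile are toy values of my own choosing.
import math

dt = 0.001
m_assumed = 2.0            # mass built into the perceptual integrator
v_true = v_est = 0.0
err_while_matched = None
for i in range(2000):      # 2 seconds
    t = i * dt
    F = math.sin(t)                    # applied force (torque)
    m_true = 2.0 if t < 1.0 else 3.0   # the load changes at t = 1 s
    v_true += (F / m_true) * dt        # Newtonian physics
    v_est += (F / m_assumed) * dt      # "perceived" velocity from integrated force
    if i == 999:
        err_while_matched = abs(v_true - v_est)

print(err_while_matched)    # 0.0: a perfect velocity percept while the masses match
print(abs(v_true - v_est))  # nonzero: the internal "model" drifts after t = 1 s
```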

After going through all that, I then saw that we really do need
perceptual input functions that detect velocity directly, because of
disturbances and those changing loads which do alter the mass being
moved. So the model idea has limits and can’t be the final answer. But
just going through this showed me, perhaps, how the people who proposed
that control systems had to contain internal models of the environment
were thinking. We do perceive the environment by perceiving the behavior
of models of the environment which are defined by perceptual input
functions and experienced as neural signals. PCT is not as far as it
seemed from OCT. Maybe a von Holst efference copy would serve as a
substitute for the signal from a Golgi organ, and if we integrate it, as
a substitute for a velocity signal. That would really work if there were
no unanticipated disturbances and no changes of properties in the control
loop. It wouldn’t work very well in the real world, but it’s not just a
wild idea plucked out of thin air.
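The limitation described above can also be shown concretely. In this sketch (same illustrative parameters as before, my construction), a constant external force acts on the mass but is not picked up by the force sensor, so the integrated-force model of velocity drifts away from the true velocity and the position runs past the reference; direct velocity sensing keeps the error small.

```python
# Why force integration alone isn't enough: an unsensed disturbance
# force makes the velocity "model" drift (parameters are illustrative).
dt = 0.001
k_pos, k_vel = 5.0, 50.0
x_ref, disturbance = 1.0, 0.5

def run(direct_sensing):
    x = v = v_hat = force = 0.0
    for _ in range(int(10.0 / dt)):
        v_ref = k_pos * (x_ref - x)
        v_hat += force * dt              # integrates only the sensed output force
        perceived_v = v if direct_sensing else v_hat
        force = k_vel * (v_ref - perceived_v)
        v += (force + disturbance) * dt  # mass = 1; the disturbance is unsensed
        x += v * dt
    return x

print(round(run(True), 3))   # direct sensing: small steady-state error near 1.0
print(round(run(False), 3))  # model only: position drifts well past the reference
```

The drift happens because the integrator's estimate differs from the true velocity by the running integral of the unsensed disturbance, an error that grows without bound.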

The models really do exist. When we’re imagining the world, PCT
says that the output to lower systems is being routed directly to the
place where inputs from the lower system would be detected: the
imagination connection, as we call it. But there’s a perceptual input
function involved there, and except for the problem of masses and
disturbances, it generates a perceivable version of the external world.
When we’re planning how to do something, we even try to imagine the
relevant disturbances of the model and changes in properties of the
model, though it’s hard to do that accurately or in any realistic detail.
We run a mental model in imagination to see what happens when we try to
control it. That gets us close, and when we try to carry out the plan, we
can rely on the negative feedback in the lower systems to forgive us some
of our prediction errors as we forgive those who fail to predict their
own.
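The imagination connection described above can be caricatured in code. This is my own toy construction, not a published PCT implementation: the system's output is routed straight back through a perceptual input function instead of acting on the world, so control can be rehearsed without moving anything.

```python
# Toy sketch of the "imagination connection": the loop's environment
# is the system's own output (gain and step count are illustrative).

def imagine(input_function, reference, steps=200, gain=0.5):
    """Run a control loop whose 'environment' is the imagined state."""
    imagined_state = 0.0
    for _ in range(steps):
        p = input_function(imagined_state)  # perceive the imagined state
        error = reference - p
        imagined_state += gain * error      # output loops straight back inside
    return imagined_state

# Rehearse bringing a perception (here simply the state itself) to 5.0.
print(round(imagine(lambda s: s, reference=5.0), 3))  # -> 5.0
```

As the text notes, the rehearsal succeeds precisely because there are no external disturbances or changed properties inside the imagined loop; the real world is forgiven by the lower systems' negative feedback.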

I’m sure there’s a lot more like that in the relationship of PCT to OCT.
It’s up to experts on both sides to find such points of commonality. If
we all look for them, the result will be a bigger and simpler model with
better capabilities and more realistic details. We can’t lose.

Best,

Bill

···

At 07:08 PM 11/3/2011 -0700, Tim Carey wrote:

Hi Bill,

BP: Not if you started out believing something that was disproven on the way to discovering the truth.

I don’t think I understand that from either the perspective of being
right or finding the truth.

My question was, paraphrased, are you willing to search for and accept the truth even if that means you may have to admit you were wrong?

No, your question (or rather questions) was: So what is the highest goal
here? Is it to be right, or to know the truth?

Hello, Steve –

SS: I agree very much with your comments. Neural implementation can be difficult to predict from global brain performance. I’m spending what little extra time I have trying to watch how neural networks organize to learn multi-joint tasks (and hopefully generate interesting emergent properties).

BP: I think we’ll get along fine. Really, though, you’re going to have to
make a hole in your busy schedule so you can at least look at the demos
in my Living Control Systems III: The fact of control. I haven’t
been trying to extend my model to neural circuitry, but short of that
we’ve been exploring the same territory and I’ve been doing that for
quite a while. Multijoint control has been in my models for many years.
Richard Kennaway has used the PCT model to create an avatar that can be
given a text and output it in American Sign Language as arm, hand, and
finger gestures (with some physical dynamics) that a deaf person can
recognize.

SS: I’m not sure what people in this group think about Marr’s work (not the 2D sketch). What I like from his ideas is the recognition that there are different levels of models to describe different levels of brain processing.

BP: Well, since I said that in 1960 and have been elaborating on it ever
since, I do think it’s time for some PCT to make the short hop into OCT.
OCT really doesn’t have to reinvent all the wheels I’ve had turning for
50 years. I guess I really should follow that up by finding out who Marr
is and what he has been saying – any papers you could email me?

SS: As Bill kindly defends my position, I believe OCT captures many aspects of motor behaviour, but its (very complex formal) mathematics does not capture neural implementation (i.e., the role of different brain regions or how individual neurons contribute to behaviour).

BP: That’s what I thought. We in PCT are in the same position, but
working on it.

SS: I think an important thing to remember is that it is dangerous to simply believe your ideas/theories are correct and all others are wrong. One must always have an open mind and let facts test ideas. This is the difference between religion and science.

BP: Right on, as you might expect from me considering my posts of the
last few days. Just remember to listen to your own admonitions, as I have
been listening to mine.

I hope you’re at least toying with the idea of attending and presenting
at the Control Systems Group (CSG) meeting next year, July 17-21, in
collaboration with the department of psychology and neuroscience at the
University of Colorado, Boulder. We don’t have any budget for inviting
VIPs like you, but Boulder isn’t a bad base for having a
vacation.

Best,

Bill

···

At 09:08 AM 11/4/2011 -0400, Stephen Scott wrote:

(Gavin Ritz 2011.11.05.11.07NZT)

Hi Kerry

Here’s what I said to someone else.

Bernard sent me a wonderful article by von Foerster, “On Constructing Reality”. (Bernard Scott is one of the main actors in the second-order cybernetics field.)

Here is the gist of it: we are not interested in information or cybernetics or complexity theory or Perceptual Control Theory or chaos theory; we are interested in “Creation theory” or “Learning theory” or “On the Nature of Creativity”.

That’s what we really want to know, right???

Regards

Gavin

Folks,

I am utterly fascinated by this entire dialog for many reasons, including some that have nothing to do with PCT or OCT but rather with the created human system/interface that is growing from this forum (actually, maybe that human system/interface development is part of OCT and/or PCT in one way or another). Not my point….

I have a question that on the surface may appear very elementary for
many of you….

Behind this action called “Control,” aren’t some types of “variables” in play here? Of course. But have PCT and/or OCT actually defined what a “variable” actually is, and if so, is this coming from a theoretical definition or a scientific definition? I understand that some dictionaries define a “Variable” as:

var·i·a·ble /ˈvɛəriəbəl/, adjective

1. apt or liable to vary or change; changeable: variable weather; variable moods.

2. capable of being varied or changed; alterable: a variable time limit for completion of a book.

3. inconstant; fickle: a variable lover.

4. having much variation or diversity.

5. Biology. deviating from the usual type, as a species or a specific character.

So many dictionaries claim a “Variable” is one thing or another, but are there any academic papers written specifically about “Variables” that would support this process we know as “Controlling”? (Variable-centric, not necessarily control-centric, papers.) Papers that really dissect variables are what I am looking for here.

Lastly, I would love for all of you to really rip the below apart.
Don’t worry, you won’t upset me or hurt my feelings. I wrote the below knowing
it is flawed. This is why I am asking the above questions.

“Variables come in all forms. If something, anything, has the ability
to change or vary, it is construed as a variable. Pretty much anything and
everything can be construed as a variable ranging from humans to terrain, water
to stars, temperature to feelings, energy to matter, etc. The universe which we
live in is filled with variables. They can easily be construed as the pieces
which make us who we are as beings. Like it or not, variables can arguably be
construed as the single universal entity which actually controls us as a cognitively induced species. Arguably, variables induce perceptions, perception influences thought, and our thoughts promote our actions.”

Kerry
Patton

···

On Thu, Nov 3, 2011 at 9:01 PM, Bill Powers powers_w@frontier.net wrote:

Hi, Tim –

At 01:15 PM 11/3/2011 -0700, Tim Carey wrote:

TC: Hmmm. If you knew ‘the truth’ wouldn’t that also mean you were right?

BP: Not if you started out believing something that was disproven on the way to
discovering the truth. My question was, paraphrased, are you willing to search
for and accept the truth even if that means you may have to admit you were
wrong?

TC: That aside, I’m surprised at your suggestion that both theories purport to be theories about how organisms control things. I get that both theories have something to do with control, but given that OCT is accepted by the people who study it (at least one of them anyway, but he’s published in Nature, so that’s a pretty good ‘one’ to use as a benchmark) not to be a description of how the brain works, how can it be a theory about the ‘how’ of control?

BP: There are many different ways to implement any given function in neurons.
One way to represent a position in two-dimensional space is to have one neuron
fire at a rate representing the X direction of measurement and a second neuron
fire at a rate representing the Y direction. That’s how the current PCT models
would do it. Another way is to have a single neuron in a specific place in a
two-dimensional map fire at some rate. Now the X-Y position is represented by a
single neuron’s location in the neural map. The latter is known to be the case
in visual areas of the brain, and also “somatotopic” areas. So we
know that the brain, in at least some areas, doesn’t work the way the PCT model
works, though both ways of representing position might be able to carry out the
same function in a control system. The only problem is that I can’t imagine how
that would be done under the mapping scheme. Some day, of course, we will
understand how but right now it’s still a mystery to me. Martin Taylor
thinks we can just ignore that problem, but I don’t.
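The two representations Bill contrasts can be made concrete with a toy sketch (grid size, rates, and function names are mine, for illustration only): rate coding uses two neurons whose firing rates encode X and Y, while place coding uses a map of neurons where only the unit at location (x, y) fires.

```python
# Toy contrast of rate coding vs place coding of a 2D position.

def rate_code(x, y, max_rate=100.0):
    """Represent (x, y) in [0, 1]^2 as two firing rates (spikes/s)."""
    return (max_rate * x, max_rate * y)

def place_code(x, y, grid=10):
    """Represent (x, y) by which neuron in a grid x grid map is active."""
    i = min(int(x * grid), grid - 1)
    j = min(int(y * grid), grid - 1)
    field = [[0] * grid for _ in range(grid)]
    field[j][i] = 1  # one neuron fires; its map location carries the value
    return field

def decode_place(field):
    """Recover (x, y) from the active neuron's location in the map."""
    grid = len(field)
    for j, row in enumerate(field):
        for i, val in enumerate(row):
            if val:
                return ((i + 0.5) / grid, (j + 0.5) / grid)

print(rate_code(0.25, 0.5))                 # (25.0, 50.0)
print(decode_place(place_code(0.25, 0.5)))  # (0.25, 0.55)
```

The sketch also shows the quantization cost of place coding at a fixed map size, and hints at Bill's puzzle: it is easy to see how two scalar rate signals feed a comparator, and much less obvious how a comparator would operate on two map locations.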

I think you may be making too much of Stephen’s observation that the OCT model
doesn’t work the way the brain works. I think he means only that while certain
functions are proposed to be carried out by the brain, they are probably not
implemented in the same mathematical way an analyst would proceed.

TC: I don’t know that this battle is going to get you any closer either to the truth or to being right, but it’ll be great to see the experiments you come up with. More good stuff for PCT.

BP: Well, we shall see. It could be that PCT has some mistakes in it that OCT
could correct. If we’re not willing to accept that possibility, we might as
well not do the testing, because that would say we don’t want to know the truth
if it’s not what we already believe.

Best,

Bill

Kerry Patton

Professor

Henley-Putnam
University

(570)278-3618

(Gavin Ritz 2011.11.05.13.35NZT)

Hi Kerry

Arguably, variables induce perceptions and perception influences
thought and our thoughts promote our actions.

I thought I should answer this specific statement with Bill’s thread from a few days ago:

“Living control systems, and most nonliving ones, control their sensory inputs, not their motor outputs.” (Powers)

“Behavior of a living control system is the control of perception, not action.” (Powers)

Heinz von Foerster says we construct (invent) our entire reality.

PCT’s key variable on the input side of things is the Perceptual Controlled Variable: any variable that’s involved in the controlling part, not the environment (language being such a variable). Have a look at the computer simulation in the “Fact of Control”. Put it on your PC and play with it. Or Rick Marken’s Mind Readings: http://www.mindreadings.com/marken.html

There is a BBC program on mind games; if we think we actually see colours, this program shows you practically that we invent them. It’s all in the reference signals (our neurological systems). If you think we actually see 3D, we don’t; we invent it. Some great little tests to intrigue one.

Our entire world, as we see and hear it, is just an energetic transduction, transduced just the way the neural system sees fit. (I can hear all the roondi-goondis on this comment.)

So that’s a bit different to your comment above.

Here’s a question: what do you think is really out there, if we invent so much? The universe is made up of 4% baryonic matter (us) and 96% other stuff.

Whose reality are we really living in???

So what is it that we really want to know???

Regards

Gavin

···

(Gavin Ritz 2011.11.05.16.25NZT)

[From Erling Jorgensen
(2011.11.04 10:50 EDT)]

Kerry Patton (Fri, 4 Nov 2011 07:44:30 -0400)

One way to test a dictum is to ask, “How could it
be otherwise?”

Impossible to know if, as evolution says, we just create new phase space (with all those wonderful new protein configurations) that may become niches, and then some do.

Worse, we create our own realities. Is it possible, because of human mirror neurons, that we’ve created a reality we may never be able to fully understand with the type of knowledge we’ve created? We all only think we do?

I ask this question rather: so what is it we really want to know???

Kind regards

Gavin

···

Hi, Warren et al –

WM: Hi all, I have always
regarded the imagination mode as the surrogate for internal ‘models’ in
PCT and that is my favourite chapter of the 1973 book. The following
researchers in Southampton (Rabinovich & Jennings, 2010) seem to have
realised that PCT involves an ‘egocentric’ model of user-perceived
reference values, and show it is far superior to alternative, established
predictive models in robotic systems:


http://eprints.ecs.soton.ac.uk/21139/1/ras.pdf

BP: My god, the first reference is to B:CP and there are two other
references to PCT. Whoopee.

By the way, reference values are not “perceived” directly. What
is perceived are perceptual signals generated by perceptual input
functions. Perceptual signals represent the current state of a variable,
whatever that state might be. Those signals are compared with reference
signals which specify some particular target value for the perceptual
signal. Since all signals in PCT are assumed to be scalar values, the
reference signal has to specify only a magnitude, not what kind of
perception is to be obtained. The kind of perception (that is, its
correlate in the external world) is determined by the organization of the
perceptual input function in the system receiving the reference signal.
The perceptual input function defines WHAT VARIABLE is to be controlled,
and the perceptual signal represents its current magnitude; the reference
signal specifies HOW MUCH of it is wanted.
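Bill's point, that the input function defines WHAT is controlled while the scalar reference says only HOW MUCH, can be sketched directly. In this illustration (my own construction; names, gains, and the toy environment are assumptions), two systems share an identical comparator and output rule, yet control entirely different variables because their perceptual input functions differ.

```python
# Sketch: the perceptual input function defines the controlled variable;
# the reference signal is just a magnitude (all parameters illustrative).

def make_control_system(input_function, gain=10.0):
    """Build a one-step controller; what it controls is defined solely
    by its perceptual input function."""
    def step(environment, reference, dt=0.01):
        p = input_function(environment)  # perceptual signal: current magnitude
        error = reference - p            # comparator: reference is a bare number
        return gain * error * dt         # output that acts on the environment
    return step

# Same comparator and output rule, different input functions,
# therefore different controlled variables.
env = {"x": 2.0, "y": 8.0}
control_sum = make_control_system(lambda e: e["x"] + e["y"])
control_diff = make_control_system(lambda e: e["x"] - e["y"])

for _ in range(2000):
    env["x"] += control_sum(env, reference=12.0)   # wants x + y to be 12
    env["y"] -= control_diff(env, reference=3.0)   # wants x - y to be 3

print(round(env["x"] + env["y"], 2), round(env["x"] - env["y"], 2))  # -> 12.0 3.0
```

Swapping the lambda changes the external correlate of the perception without touching the reference, which is exactly the division of labor the paragraph describes.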

WM: I would still like to know
what conditions encourage and constrain the imagination mode (and the
three others).

BP: So would I.

Best,

Bill