Taros Review

[From Rupert Young (2014.09.05 14.00)]

This is quite long, but I think you will find it enlightening.

The Taros (http://taros.org.uk/) conference was held this week in Birmingham over three days, with the last day
being an industry day, and I spoke in the afternoon. The first two
days were PhD students presenting papers on their research. There
were also a couple of high-profile keynote speakers.
The main surprise to me was that no one, as I recall, mentioned
“purpose”, which I would have thought would be the main
prerequisite for research on autonomous systems.
I had two main concerns prior to attending this conference: one,
that everybody would already be doing PCT-like research and that I
would have nothing original to add; the other, that they would be
using very different methodologies and wouldn’t see any benefits in
a PCT approach. Of course, the latter turned out to be the case, but
in a manner far worse than I had imagined. It seemed to me that the
methodologies they were using were not just different but invalid,
and could not work for autonomous systems (see below). In fact, they
didn’t seem to understand the difference between autonomous and
automaton.
I won’t go through all the papers, but I’ll give a taste of the sorts
of methodologies used.
**Modeling of a Large Structured Environment: With a Repetitive Canonical Geometric-Semantic Model**
Use of a robot with a visual sensor to build a structured map of a
warehouse environment (just the pillars), with the future objective of
guiding a vehicle around the warehouse for automatic inventory
and mapping of stock. The robot knows its own position by way of
lasers and determines the position of the pillars by extracting visual
information from images. It was not clear how the robot is moved around.
**Monte Carlo Localization for Teach-and-Repeat Feature-Based Navigation**
A robot was manually driven around a route as the teach phase, while
recording visual features. It then had to repeat the route by
adjusting its position to match the current features with those
recorded. This actually has some elements reminiscent of PCT, as it
concerned reducing the difference between a target set of features
and the current set, though with a complex “comparator” function.
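In PCT terms, teach-and-repeat can be read as acting to cancel the error between recorded reference perceptions and currently perceived ones. A minimal, hypothetical sketch (the 1-D “bearing” and all numbers are mine, for illustration, not from the paper):

```python
# Hypothetical sketch: a robot repeating a taught route by acting to
# reduce the difference between a recorded (reference) feature and the
# feature it currently perceives. Here the "feature" is a single
# landmark bearing; all names and numbers are illustrative.

def control_step(reference, perceived, gain=0.5):
    """One PCT-style step: output is proportional to the error
    between the reference perception and the current perception."""
    error = reference - perceived
    return gain * error  # action that shifts the robot's position

# Taught route: the bearing the robot recorded at this waypoint.
reference_bearing = 10.0

# The robot starts off-route, perceiving a different bearing.
perceived_bearing = 2.0

for _ in range(20):
    action = control_step(reference_bearing, perceived_bearing)
    # Simplified environment: acting shifts the perceived bearing directly.
    perceived_bearing += action

print(round(perceived_bearing, 3))  # converges towards 10.0
```

The point of the sketch is that no model of the route-following “behaviour” is needed; the behaviour falls out of continuously cancelling perceptual error.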
**Bioinspired Mechanisms and Sensorimotor Schemes for Flying: A Preliminary Study for a Robotic Bat**
Some suggestions for the way forward in building such a robot. Although
it sounds like a good candidate, and acknowledges tight sensorimotor
coupling, there didn’t seem to be any PCT concepts recognised.
**Evolutionary Coordination System for Fixed-Wing Communications Unmanned Aerial Vehicles**
Optimisation of the relative positions of aerial vehicles to form a
communication network with sets of ground vehicles. A genetic
algorithm is used to optimise the parameters and generate flying
manoeuvres. This also has some parallels with PCT, as it concerns
changing positions to maintain a set of values within certain
limits; whether there is any formal equivalence is something the
mathematically minded could perhaps investigate. This could probably
be modelled with a PCT approach, by regarding the vehicles as
independent purposive control units, but does it matter if this
system works? What would be the criticisms of this system from a PCT
perspective? Perhaps that it treats everything at a single level
and could benefit from a hierarchy with higher levels. Perhaps that it
is unnecessarily complex, and that PCT provides a framework that is
more easily understood and can be applied to other domains.
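As a rough illustration of the PCT alternative suggested above, each vehicle could be an independent control unit holding its perceived separation from an assigned ground vehicle at a reference value. A hypothetical 1-D sketch (the geometry, names and values are all mine, not the paper’s):

```python
import numpy as np

# Hypothetical sketch: each aerial vehicle as an independent purposive
# control unit, keeping its perceived separation from an assigned
# ground vehicle at a reference value, with no global optimisation.
# All values and the 1-D geometry are illustrative.

REFERENCE_DISTANCE = 5.0   # desired separation for communications
GAIN = 0.3

ground_positions = np.array([0.0, 20.0, 40.0])
air_positions = np.array([2.0, 30.0, 33.0])     # arbitrary starts

for _ in range(100):
    perceived = air_positions - ground_positions  # perceived separation
    error = REFERENCE_DISTANCE - perceived
    air_positions = air_positions + GAIN * error  # act to reduce error
    ground_positions += 0.1   # disturbance: the ground vehicles move

# Each air vehicle ends up near (slightly below) the reference
# separation, with a small steady-state lag caused by the constant
# disturbance, despite no coordination between units.
print(np.round(air_positions - ground_positions, 2))
```

Each unit here knows nothing about the others, yet the formation holds against the disturbance; that is the sense in which the vehicles act as independent purposive control units.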
Other papers covered topics such as multi-agent drone exploration
(which did have stabilisation against disturbances and PID
controllers), a formal logic model of behaviour priorities for
planning, a consequence engine for an “ethical” robot, a wearable
battery unit powered by urine, tactile sensor processing (actually
visual recognition of deformation) and a robot for folding clothes
(quite impressive, but using traditional computer vision techniques
and kinematics).
Although there were some minor similarities with PCT, and a few
instances of equivalent controllers for simple variables, there
certainly wasn’t any acknowledgement of perceptual control or
hierarchical control, and very little, if any, recognition of
autonomous agents as purposive systems.
But the main problem was exemplified by the methodology described in
this paper:
**CogLaboration: Towards Fluent Human-Robot Object Handover Interactions**
The objective was to come up with a model of the interactions
between humans and robots for handing over objects. The approach was
to observe the behaviour of many real instances of objects being
handed over between humans, and to try to extract, by computer vision
techniques, some consistencies in variables such as the position and
speed of limbs. I spoke to one of the authors, who said this was very
difficult because there were many variations. Well, *of course there
were*, I screamed in my head, as you’re trying to model the
variations inherent in the differing observed external circumstances
of a system controlling an internal goal. I did suggest it might be
better to model the system from the perspective of the (purposive)
system, but that seemed to fall on deaf ears.
This methodology of modelling (specific) behaviour might be
dismissed if it were not for the two main speakers of the week:
the keynote speaker, Prof. Yiannis Demiris (Imperial College
London), with his talk **Towards Personal Assistive Robotics**,
and the IET (Institution of Engineering and Technology) Public
Lecture by Prof. Sethu Vijayakumar (University of Edinburgh),
**Robots that Learn: The Future of Man or the ‘Man of the Future’?**
Both of these speakers reiterated this approach as their fundamental
methodology for constructing feed-forward models. Demiris justified
it with the old chestnut that a 150 ms lag is too slow for feedback
control, so a predictive model is necessary. Similarly, Vijayakumar
cited an experimental learning task that he claimed supported
feed-forward models. The latter talk should be available online soon,
so I will come back to this when it is. I found it quite incredible
that they thought that modelling behaviour was a viable approach. The
main consequences of the approach are that every different type of
behaviour has to be modelled separately and that the resulting
implementations are automatons rather than autonomous systems.
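For what it’s worth, the 150 ms chestnut is easy to probe in simulation: a plain feedback controller whose perception lags by 150 ms can still track a slowly changing target. A hedged sketch, with every parameter illustrative rather than a model of any real system:

```python
from collections import deque

# Sketch of feedback control under a 150 ms perceptual lag.
# Time step is 10 ms, so the lag is 15 steps. All parameters
# (gain, target speed, durations) are illustrative.

DT = 0.01          # 10 ms per step
LAG_STEPS = 15     # 150 ms transport lag in perception
GAIN = 5.0         # controller gain (per second)

position = 0.0
# Perception pipeline: the controller only sees 150 ms old positions.
pipeline = deque([0.0] * LAG_STEPS, maxlen=LAG_STEPS)

errors = []
for step in range(1000):
    t = step * DT
    target = 0.5 * t                 # slowly moving target, 0.5 units/s
    perceived = pipeline[0]          # delayed perception of own position
    error = target - perceived
    position += GAIN * error * DT    # act on the (lagged) error
    pipeline.append(position)
    errors.append(abs(target - position))

# After an initial transient, the tracking error stays small and
# bounded: the lag degrades control a little, it does not defeat it.
print(round(errors[-1], 3))
```

The lag only becomes fatal when the gain is too high or the target changes much faster than the loop can respond, which is precisely the PCT position: feedback is perfectly adequate for the time scales at which most perceptions are controlled.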
Incidentally, after the Demiris talk a questioner did actually
mention Perceptual Control Theory. I spoke to him afterwards; he
was Prof. Alan Winfield from BRL (Bristol Robotics Laboratory). He
said he didn’t know much about PCT but was reading a paper by Roger
K. Moore, though he didn’t think he really believed it (PCT). He had
also seen a talk, or slides, by Ted Cloak.
So, I think there is good news and bad news, if this is
representative of the global state of robotics. The good news is
that the perceptual control approach is unique and has the potential
to progress robotics far beyond what can be achieved by the current,
flawed methodology. The bad news is how entrenched the current
methodology is within the robotics community, meaning that it is
going to be difficult for a different approach to make headway
unless it can demonstrate something impressive or solve some issue
not handled by the prevailing methodology. The main problem, I
think, is that the current researchers have good resources and
technologies that produce, on the surface, some quite funky-looking
demonstrations, which I will point out when the above lecture is
available.
I did give a talk, which I hope will also be available online soon.
I gave it on the last afternoon, so my trepidation mounted over
the three days as I realised that I was going to be contradicting
these prestigious speakers, and Prof. Aaron Sloman, a big name in the
philosophy of AI, was in attendance. As I was also presenting a
methodology that would significantly reduce the complexity of
modelling, within a new paradigm, the least I expected was argument
or abuse. But when it came to questions there was just silence! Then
one industry guy did ask about memory, but that was almost it.
Afterwards a woman (a computer science lecturer) did come up to me,
saying that perceptions were not goals but help us get to goals.
Though she didn’t explain what goals were, I did try to explain a
bit, and she ended up going away saying she would think about it.
Another person, Prof. Tony Pipe from BRL, said the talk was
interesting, though I don’t know if he was just being polite, as I’d
talked to him previously a bit about it, and about a programme he is
running for Robotics Innovation, which I hope to join.
On the whole it was very interesting, and I got some useful contacts,
such as for the above programme. Although it was slightly depressing
seeing the current misguided state of robotics, it actually gave me
more hope and confidence that we have something unique and
significantly more viable to offer than is currently available. The
challenge, though, is whether we can navigate through the
opportunities that are undoubtedly out there and find the resources
and innovation to leap-frog over the current technology, and not end
up on the same pile as Betamax.


-- Regards,
Rupert

[From Dag Forssell (2014.09.05 19.50)]

Rupert,

This is fantastic. If your presentation becomes available as a video, please let us know.

Best, Dag


[From Rick Marken (2014.09.06.1005)]

···

Rupert Young (2014.09.05 14.00)–

RY: This is quite long, but I think you will find it enlightening.

RM: Very much so. Thanks, Rupert! Looks like you will have the same problem with roboticists that I have had with scientific psychologists.

Good luck!

Best

Rick



Richard S. Marken, Ph.D.
Author of Doing Research on Purpose.
Now available from Amazon or Barnes & Noble

[From Frans Plooij (2014.10.08.1242)]

Has anybody seen this?

Huang, BBS with commentaries, Goals and competing motives.pdf (668 KB)


[From Rick Marken (2014.10.11.1535)]

···

Frans Plooij (2014.10.08.1242)–

FP: Has anybody seen this?

RM; No, I hadn’t seen it. But based on a quick perusal of the paper it doesn’t seem like I missed much.

Best

Rick


Richard S. Marken, Ph.D.
Author of Doing Research on Purpose.
Now available from Amazon or Barnes & Noble

[From Rick Marken (2014.10.15.1715)]

···

On Wed, Oct 15, 2014 at 12:40 PM, richardpfau4153@aol.com wrote:

[Richard Pfau (2014.10.15 15:30 EDT) to Frans Plooij (2014.10.08.1242)]

Frans,

Thank you for passing on the article by Julie Y. Huang and John A. Bargh entitled “The Selfish Goal: Autonomously Operating Motivational Structures as the Proximate Cause of Human Judgment and Behavior” along with the open peer commentary and Huang and Bargh’s responses that follow.

In addition to being informative, it was gratifying to see Huang and Bargh refer several times to Powers (1973) and control theory (e.g., on pp. 159, 162, and 163), including, when discussing goal conflict and cooperation, their statement:

“here we suggest that cybernetic models of control (e.g., Perceptual Control theory; Powers, 1973) may help address these issues, as well as the open questions regarding goal dynamics”

and then a few paragraphs later when discussing control systems, stating that:

“Along with others (Carver & Scheier 2002, p. 305), we find this framework congenial for developing more systems-based understandings of individual behavior because people can be viewed as organizations of self-regulating feedback systems” (p. 163).


Richard S. Marken, Ph.D.
Author of Doing Research on Purpose.
Now available from Amazon or Barnes & Noble

[From Rick Marken (2014.10.15.1715)]

Oops, pushed the wrong button before completing my post. I’ll try again.

···

RM: Here’s another quote from the paper:


“Here, we present the Selfish Goal model, which holds that these inconsistencies in judgment and behavior can be meaningfully understood as the output of multiple, and in some cases, competing goal influences. Whether conscious or unconscious, every goal essentially programs particular sets of behaviors to be enacted by the person pursuing that goal.”

RM: This does not sound even close to what would be said by people who understand PCT. I personally prefer that people reject or ignore PCT than accept something that they say is PCT (or Powers’ theory of behavior) when it is not anything like PCT but just the SOS. But that’s just me; I like my PCT neat.

Best

Rick


Richard S. Marken, Ph.D.
Author of Doing Research on Purpose.
Now available from Amazon or Barnes & Noble