[From Rupert Young (2014.09.05 14.00)]
This is quite long, but I think you will find it enlightening.
The TAROS (http://taros.org.uk/) conference
was held this week in Birmingham over three days with the last day
being an industry day, and I spoke in the afternoon. The first two
days were PhD students presenting papers of their research. There
were also a couple of high-profile keynote speakers.
The main surprise to me was that no-one, as I recall, mentioned
“purpose”, which I would have thought would be the main
prerequisite of research on autonomous systems.
I had two main concerns prior to attending this conference, one,
that everybody would be already doing PCT-like research and that I
would have nothing original to add, and the other that they would be
using very different methodologies and wouldn’t see any benefits in
a PCT approach. Of course the latter turned out to be the case, but
in a manner far worse than I had expected. It seemed to me that the
methodologies they were using were not just different but
fundamentally invalid, and could not work for autonomous systems
(see below). In fact they didn’t seem to understand the difference
between an autonomous system and an automaton.
I won’t go through all the papers but give a taste of the sorts of
methodologies used.
Use of a robot with a visual sensor to build a structured map of a
warehouse environment (just the pillars) for the future objective of
guiding a vehicle around the warehouse for the automatic inventory
and mapping of stock. The robot knows its own position by way of
lasers and determines the position of pillars by extracting visual
information from images. It was not clear how the robot was moved around.
A robot was manually driven around a route as the teach phase, while
recording visual features. It then had to repeat the route by
adjusting its position to match the current features with those
recorded. This actually has some elements reminiscent of PCT, as it
concerned reducing the difference between a target set of features
and the current set, but with a complex “comparator” function.
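To make that resemblance concrete, here is a minimal toy sketch (my own illustration, not the authors’ method): the recorded features act as the reference, the currently observed features as the perception, and the action simply works to reduce the difference between them, even against a small unmodelled disturbance. All the numbers and names here are made up for the example.

```python
import math

# Toy sketch of teach-and-repeat framed as perceptual control.
# The "reference" is the feature vector recorded during the teach phase;
# the "perception" is the currently observed feature vector. The robot
# acts to reduce the error between them (a simple proportional loop).

def control_step(reference, perception, gain=0.5):
    """Return an action that moves the perception toward the reference."""
    return [gain * (r - p) for r, p in zip(reference, perception)]

# Toy world: the action directly shifts the perceived features, plus a
# constant disturbance the controller never models explicitly.
reference = [2.0, -1.0, 0.5]
perception = [0.0, 0.0, 0.0]
disturbance = [0.05, -0.02, 0.01]

for _ in range(50):
    action = control_step(reference, perception)
    perception = [p + a + d
                  for p, a, d in zip(perception, action, disturbance)]

# Remaining error after the loop has settled.
error = math.sqrt(sum((r - p) ** 2 for r, p in zip(reference, perception)))
print(round(error, 3))
```

With these illustrative numbers the error falls from about 2.3 to a small residual sustained by the disturbance; a pure proportional loop always leaves such an offset, which is one reason real controllers add integration or higher gain.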
Another paper gave some suggestions for the way forward in building
such a robot (a robotic bat). Although it sounds like a good
candidate, and acknowledges tight sensori-motor coupling, there
didn’t seem to be any recognition of PCT concepts.
Optimisation of the relative positions of aerial vehicles to form a
communication network with sets of ground vehicles. A genetic
algorithm is used to optimise the parameters and generate flying
manoeuvres. This also has some parallels with PCT as it concerns
changing positions to maintain a set of values within certain
limits. Whether there is any formal equivalence perhaps those
mathematically minded could investigate. This probably could be
modelled with a PCT approach, by regarding the vehicles as
independent purposive control units, but does it matter if this
system works? What would be the criticisms of this system from a PCT
perspective? Perhaps, that it treats everything at a single level
and could benefit from hierarchy with higher-levels. Perhaps that it
is unnecessarily complex and that PCT provides a framework that is
more easily understood, and can be applied to other domains.
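As a toy illustration of that alternative (entirely my own sketch, not the paper’s method): each aerial vehicle is treated as an independent purposive control unit that perceives its offset from an assigned ground vehicle, compares that with a reference offset, and acts to reduce the error, while the ground vehicles’ slow drift acts as a disturbance. All values are invented for the example.

```python
# Toy sketch: aerial vehicles as independent purposive control units,
# rather than parameters optimised globally by a genetic algorithm.

GAIN = 0.3
REF_OFFSET = 1.0  # assumed desired offset for communication coverage

def step(air_pos, ground_pos):
    """One control iteration: each aerial unit reduces its own error."""
    new_positions = []
    for a, g in zip(air_pos, ground_pos):
        perceived = a - g                  # perceived offset from its ground unit
        error = REF_OFFSET - perceived     # compare with the reference
        new_positions.append(a + GAIN * error)
    return new_positions

air = [0.0, 5.0, 9.0]
ground = [2.0, 4.0, 6.0]

for _ in range(40):
    air = step(air, ground)
    ground = [g + 0.01 for g in ground]    # slow drift: a disturbance

offsets = [round(a - g, 2) for a, g in zip(air, ground)]
print(offsets)
```

Each unit settles near its reference offset despite the drift, with no unit knowing anything about the others; whether this scales to the paper’s full communication-network objective is exactly the kind of question the mathematically minded could investigate.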
Other papers covered topics such as multi-agent drone exploration
(which did have stabilisation against disturbances and PID
controllers), a formal logic model of behaviour priorities for
planning, a consequence engine for an “ethical” robot, a wearable
battery unit powered by urine, tactile sensor processing (actually
visual recognition of deformation) and a robot for folding clothes
(quite impressive, but using traditional computer vision techniques
and kinematics).
Although there were some minor similarities with PCT and a few
instances of equivalent controllers for simple variables there
certainly wasn’t any acknowledgement of perceptual control or
hierarchical control and very little, if any, recognition of
autonomous agents as purposive systems.
But the main problem was exemplified by the methodology described in
this paper: The objective was to come up with a model of the interactions
between humans and robots for handing over objects. The way this was
approached was to observe the behaviour of many real instances of
the handover of objects between humans, and try to extract, by
computer vision techniques, some consistencies of variables such as
position and speed of limbs. I spoke to one of the authors who said
this was very difficult because there were many variations. Well, of
course there were, I screamed in my head, as you’re trying to
model the variations inherent in the differing observed external
circumstances of a system controlling an internal goal. I did
suggest it might be better to model the system from the perspective
of the (purposive) system, but that seemed to fall on deaf ears.
This methodology of modelling (specific) behaviour might be
dismissed as an aberration, were it not for the two main speakers of
the week.
There was a keynote talk by Prof. Yiannis Demiris (Imperial College
London), and the IET (Institution of Engineering and Technology)
Public Lecture was given by Prof. Sethu Vijayakumar (University of
Edinburgh).
Both of these speakers reiterated this approach as their fundamental
methodology, in order to construct feed-forward models. Demiris
justified this approach with the old chestnut that 150ms lag was too
slow for feedback control so a predictive model was necessary.
Similarly, Vijayakumar cited an experimental learning task that he
claimed supported feed-forward models. The latter talk should be
available online soon so I will come back to this when it is. I
found it quite incredible that they thought that modelling behaviour
was a viable approach. The main consequences of this approach are
that every different type of behaviour has to be modelled separately
and that the resulting implementations are automatons rather than
autonomous agents.
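The 150ms chestnut is easy to probe with a toy simulation (all numbers here are illustrative assumptions of mine, not from the talks): a plain proportional feedback loop whose perception is delayed by 15 samples, i.e. 150ms at a 100Hz loop rate, still settles on its reference and recovers from a sudden unmodelled disturbance, provided the gain is modest.

```python
from collections import deque

# Toy sketch: feedback control with a substantial perceptual lag.
# The controller only ever sees a 15-step-old value of the variable,
# yet with a modest gain the loop remains stable and controls.

LAG_STEPS = 15     # e.g. 150 ms at a 100 Hz loop rate (assumed)
GAIN = 0.05
reference = 10.0

variable = 0.0
pipeline = deque([0.0] * LAG_STEPS)   # transport delay in perception

for t in range(600):
    pipeline.append(variable)
    perception = pipeline.popleft()    # value from 15 steps ago
    variable += GAIN * (reference - perception)
    if t == 300:
        variable += 5.0                # sudden unmodelled disturbance

print(round(variable, 2))
```

The variable ends up back at the reference despite the lag and the kick; the point is not that lag is harmless, but that lag bounds the usable loop gain, not the feasibility of feedback control.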
Incidentally, after the Demiris talk a questioner did actually
mention Perceptual Control Theory. I spoke to him afterwards and he
was Prof. Alan Winfield from BRL (Bristol Robotics Laboratory). He
said he didn’t know much about PCT but was reading a paper by Roger
K. Moore, but didn’t think he really believed it (PCT). He had also
seen a talk, or slides, by Ted Cloak.
So, I think there is good news and bad news if this is
representative of the global state of robotics. The good news is
that the perceptual control approach is unique and has the potential
to progress robotics far beyond what can be achieved by the current,
flawed, methodology. The bad news is how entrenched the current
methodology is within the robotics community, meaning that it is
going to be difficult for a different approach to make headway,
unless it can demonstrate something impressive or solve some issue
not handled by the prevailing methodology. The main problem, I
think, is that the current researchers have good resources and
technologies that show, on the surface, some quite funky looking
demonstrations, which I will point out when the above lecture is
available.
I did give a talk, which I hope will also be available online soon.
I gave it on the last afternoon, so my trepidation was mounting over
the three days as I realised that I was going to be contradicting
these prestigious speakers, and Prof Aaron Sloman, a big name in the
philosophy of AI was in attendance. As I was also presenting a
methodology that would significantly reduce the complexity of
modelling, within a new paradigm, the least I expected was argument
or abuse. But when it came to questions there was just silence! Then
one industry guy did ask about memory, but that was almost it.
Afterwards a woman (computer science lecturer) did come up to me
saying that perceptions were not goals, but help us get to goals.
Though she didn’t explain what goals were, I did try to explain a
bit and she ended up going away saying she would think about it.
Another person, Prof Tony Pipe from BRL, said the talk was
interesting, though I don’t know if he was just being polite, as I’d
talked to him previously a bit about it, and about a programme he is
running for Robotics Innovation, which I hope to join.
On the whole it was very interesting and I got some useful contacts,
such as for the above programme. Although it was slightly depressing
seeing the current misguided state of robotics it actually gave me
more hope and confidence that we have something unique and
significantly more viable to offer than is currently available. The
challenge though is whether we can navigate through the
opportunities that are undoubtedly out there and find the resources
and innovation to leap-frog over the current technology, and not end
up on the same pile as Betamax.
···
http://taros.org.uk/
** Modeling of a Large Structured Environment: With a Repetitive
Canonical Geometric-Semantic Model**
** Monte Carlo Localization for Teach-and-Repeat Feature-Based
Navigation**
** Bioinspired Mechanisms and Sensorimotor Schemes for Flying: A
Preliminary Study for a Robotic Bat**
** Evolutionary Coordination System for Fixed-Wing Communications
Unmanned Aerial Vehicles**
** CogLaboration: Towards Fluent Human-Robot Object
Handover Interactions**
** Towards Personal Assistive Robotics**
** Robots that Learn: The Future of Man or the ‘Man of the Future’**
-- Regards,
Rupert