"disambiguating HPCT artificial intelligence architecture"

[From Matti Kolu (2013.12.23.0330 CET)]

I found a question on Stackoverflow that concerns HPCT:
http://stackoverflow.com/questions/18666064/disambiguating-hpct-artificial-intelligence-architecture

It was posted some months ago. If someone here knows how the author
could go about achieving his design goals, it might be worth typing up
a reply and posting it on Stackoverflow, or perhaps here.

Here's the question:


----
I am trying to construct a small application that will run on a robot
with very limited sensory capabilities (NXT with
gyroscope/ultrasonic/touch) and the actual AI implementation will be
based on hierarchical perceptual control theory. I'm just looking for
some guidance regarding the implementation as I'm confused when it
comes to moving from theory to implementation.

The scenario

My candidate scenario will have 2 behaviors, one is to avoid
obstacles, second is to drive in circular motion based on given
diameter.

The problem

I've read several papers but could not determine how I should classify
my virtual machines (layers of behavior?), how they should communicate
with lower levels, or how internal conflicts should be resolved.

These are the papers I've gone through looking for answers, sadly
without finding them:

pct book http://www.livingcontrolsystems.com/download/pct_readings_ebook.pdf
paper on multi-legged robot using hpct
http://www2.cmp.uea.ac.uk/~jrk/PCT/jp9913p.pdf
pct alternative perspective
http://davidcenter.com/documents/BD%20Method/Ch10_PerceptualControl.pdf

and the following ideas are the results of my brainstorming:

* The avoidance layer would be part of my 'sensation layer', because
it only identifies certain values, e.g. close objects within a
specific range of ultrasonic sensor readings. The second layer would
be part of the 'configuration layer', as it would try to detect the
pattern in which the robot is driving (straight line, random, circle,
or not moving at all) using the gyroscope and motor readings. The
'intensity layer' represents raw sensor values, so it's not something
to consider as part of the design.

* The second idea is to have both layers at the 'configuration' level,
because they would respond to direct sensor values from the 'intensity
layer'. They would be arranged in a mesh-like design where each layer
can send its reference values to the lower layer that interfaces with
the actuators.
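[The reference-passing arrangement in this second idea could be sketched roughly as follows. This is a hypothetical Python illustration added for clarity; the class name, signals, and gains are invented, and are not taken from the question or the papers above.]

```python
# Hypothetical sketch: a higher-level system's output serves as the
# reference value for a lower-level system that drives the actuators.

class Level:
    """A single proportional control system in the hierarchy."""
    def __init__(self, gain):
        self.gain = gain

    def control(self, reference, perception):
        # Output is proportional to the error in the controlled perception.
        return self.gain * (reference - perception)

# Higher level controls path curvature; its output becomes the
# reference for the lower level, which controls wheel-speed difference.
curvature_sys = Level(gain=1.5)
wheel_sys = Level(gain=0.8)

def step(curvature_ref, sensed_curvature, sensed_wheel_diff):
    wheel_ref = curvature_sys.control(curvature_ref, sensed_curvature)
    return wheel_sys.control(wheel_ref, sensed_wheel_diff)

# One tick of the loop: ask for a slight curve, starting from rest.
motor_cmd = step(curvature_ref=0.2, sensed_curvature=0.0,
                 sensed_wheel_diff=0.0)
```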

My problem here is how conflicting behavior would be handled
(maneuvering around objects while keeping to the circular path).
Should it be similar to Subsumption, where certain layers get
suppressed/inhibited under some sort of priority system? Forgive my
short explanation; I did not want to make this a lengthy question.

/Y
----
Matti

[From Rick Marken (2013.12.23.1020)]


Matti Kolu (2013.12.23.0330 CET)–

MK: I found a question on Stackoverflow that concerns HPCT:

http://stackoverflow.com/questions/18666064/disambiguating-hpct-artificial-intelligence-architecture

RM: Here’s a quick answer to the person’s question.

  1. Forget about the levels. They are just suggestions and are of no use in building a working robot.

  2. A far better reference for the kind of robot the person wants to develop is the CROWD program, which is documented at http://www.livingcontrolsystems.com/demos/tutor_pct.html.

  3. The agents in the CROWD program do most of what the fellow wants the robot to do. So one way to approach the design is to try to implement the control systems in the CROWD programs using the sensors and outputs available for the NXT robot.

  4. Approach the design of the robot by thinking about what perceptions should be controlled in order to produce the behavior you want to see the robot perform. So, for example, if one behavior you want to see is "avoidance", then think about what avoidance behavior is (I presume it is maintaining a goal distance from obstacles) and then think about what perception, if kept under control, would result in you seeing the robot maintain a fixed distance from objects. I suspect it would be the perception of the time delay between sending and receiving the ultrasound pulses. Since the robot is moving in two-space (I presume), there might have to be two pulse sensors in order to sense the 2-D location of objects.
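[The control-of-perception idea in point 4 can be sketched in code. This is a minimal, hypothetical Python sketch, not taken from the CROWD program or any NXT API: a single control unit keeps a sensed distance at its reference value by leaky-integrating gain times error, which is a common PCT output function.]

```python
# Hypothetical sketch of one PCT control unit: it controls a perception
# (sensed distance to an obstacle) by adjusting an output (drive speed)
# based on the error between reference and perception.

class ControlUnit:
    def __init__(self, reference, gain, slowing=0.5):
        self.reference = reference  # desired value of the perception
        self.gain = gain            # error-to-output gain
        self.slowing = slowing      # smoothing factor on the output
        self.output = 0.0

    def step(self, perception):
        error = self.reference - perception
        # Leaky integration of gain * error (a common PCT output function).
        self.output += self.slowing * (self.gain * error - self.output)
        return self.output

# Usage: keep sensed distance at 30 cm, with a toy environment model
# in which the commanded speed changes the distance each tick.
unit = ControlUnit(reference=30.0, gain=2.0)
distance = 10.0
for _ in range(100):
    speed = unit.step(distance)
    distance += 0.1 * speed  # toy plant: speed moves the robot
```

On the real robot, `distance` would come from the ultrasonic sensor and `speed` would go to the motors; the loop structure stays the same.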

  5. There are potential conflicts between the control systems that this person needs to build; for example, I think there could be conflicts between the system controlling for moving in a circular path and the system controlling for avoiding obstacles. The agents in the CROWD program have the same problem and sometimes get into dead-end conflicts. There are various ways to deal with conflicts of this kind; for example, you could have a higher-level system monitoring the error in the two potentially conflicting systems and have it reduce the gain in one system or the other if the conflict (error) persists for some time.
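[The conflict-handling idea in point 5 could be sketched as follows. This is a hypothetical Python illustration; the supervisor class, the `last_error` attribute, and all thresholds are invented for the example.]

```python
# Hypothetical sketch: a higher-level supervisor watches the error in
# two control systems and, when both errors stay large for a while
# (a persistent conflict), lowers the gain of the circling system so
# that obstacle avoidance can win.

class Unit:
    """Minimal stand-in for a control system: just gain and last error."""
    def __init__(self, gain):
        self.gain = gain
        self.last_error = 0.0

class Supervisor:
    def __init__(self, threshold=5.0, patience=3, decay=0.9):
        self.threshold = threshold  # error magnitude counted as "large"
        self.patience = patience    # steps of joint error before acting
        self.decay = decay          # multiplicative gain reduction
        self.counter = 0

    def step(self, circle_unit, avoid_unit):
        conflict = (abs(circle_unit.last_error) > self.threshold and
                    abs(avoid_unit.last_error) > self.threshold)
        self.counter = self.counter + 1 if conflict else 0
        if self.counter >= self.patience:
            circle_unit.gain *= self.decay  # back off the circling system
            self.counter = 0

# Usage: simulate a sustained conflict and watch the circling gain drop.
circle, avoid = Unit(gain=2.0), Unit(gain=2.0)
sup = Supervisor()
for _ in range(6):
    circle.last_error, avoid.last_error = 8.0, -7.0  # both errors large
    sup.step(circle, avoid)
```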

  6. I would suggest having the person first design a prototype model of the robot as a computer program; it might be easier to test out different architectures that way.

Keep us posted on this, Matti. It sounds like a very interesting project.

Richard S. Marken PhD
www.mindreadings.com
The only thing that will redeem mankind is cooperation.

                                               -- Bertrand Russell