[From Matti Kolu (2013.12.23.0330 CET)]
I found a question on Stackoverflow that concerns HPCT:
http://stackoverflow.com/questions/18666064/disambiguating-hpct-artificial-intelligence-architecture
It was posted some months ago. If someone here knows how the author
could go about achieving his design goals, it might be worth typing up
a reply and posting it on Stackoverflow, or perhaps here.
Here's the question:
----
I am trying to construct a small application that will run on a robot
with very limited sensory capabilities (NXT with
gyroscope/ultrasonic/touch) and the actual AI implementation will be
based on hierarchical perceptual control theory. I'm looking for some
guidance regarding the implementation, as I'm confused about moving
from theory to practice.
The scenario
My candidate scenario has two behaviors: one is to avoid obstacles,
the other is to drive in a circular motion with a given diameter.
The problem
I've read several papers but could not determine how I should classify
my virtual machines (layers of behavior?) and how they should
communicate with lower levels and resolve internal conflicts.
These are the papers I've gone through looking for answers, sadly
without success:
* PCT book: http://www.livingcontrolsystems.com/download/pct_readings_ebook.pdf
* Paper on a multi-legged robot using HPCT:
http://www2.cmp.uea.ac.uk/~jrk/PCT/jp9913p.pdf
* PCT alternative perspective:
http://davidcenter.com/documents/BD%20Method/Ch10_PerceptualControl.pdf
The following ideas are the results of my brainstorming:
* The avoidance layer would be part of my 'sensation layer', because
it only identifies certain values, e.g. a specific range of ultrasonic
sensor readings indicating close objects. The second layer would be
part of the 'configuration layer', as it would try to detect the
pattern in which the robot is driving (straight line, random, circle,
or not moving at all) using the gyroscope and motor readings. The
'intensity layer' represents raw sensor values, so it's not something
to consider as part of the design.
* The second idea is to have both layers as 'configuration', because
they would be responding to direct sensor values from the 'intensity
layer'. They would be arranged in a mesh-like design where each layer
can send its reference values to the lower layer that interfaces with
the actuators.
My problem here is how conflicting behaviors would be handled
(maneuvering around objects while continuing to run in circles).
Should it be similar to subsumption, where certain layers get
suppressed/inhibited via some sort of priority system? Forgive my
short explanation; I did not want to make this a lengthy question.
/Y
----
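For anyone drafting a reply, one point worth making: in HPCT, higher
loops normally do not suppress lower ones the way subsumption layers
do; they set the lower loops' reference values, and each loop just
controls its own perception. Here is a minimal sketch of two stacked
control loops in Python (the class name, gains, and toy plant are all
made up for illustration, nothing NXT-specific): a distance loop whose
output becomes the reference of a speed loop.

```python
class ControlLoop:
    """One PCT unit: error = reference - perception; output = gain * error."""
    def __init__(self, gain):
        self.gain = gain

    def step(self, reference, perception):
        return self.gain * (reference - perception)

# Hierarchy: the distance loop's OUTPUT is the speed loop's REFERENCE.
distance_loop = ControlLoop(gain=-2.0)  # negative gain: driving forward
                                        # shrinks the perceived distance
speed_loop = ControlLoop(gain=1.0)

# Toy plant: forward speed reduces the distance to the obstacle.
distance, speed = 100.0, 0.0  # cm, cm/s

for _ in range(500):
    speed_ref = distance_loop.step(reference=30.0, perception=distance)
    drive = speed_loop.step(reference=speed_ref, perception=speed)
    speed += 0.2 * drive       # crude motor dynamics
    distance -= 0.05 * speed   # world update

print(round(distance, 1))  # settles at the 30.0 cm reference
```

The same pattern would extend to the circle-driving behavior: a
configuration-level loop perceiving the driving pattern would send
references down to speed/turn-rate loops, and conflict is avoided by
giving each lower loop only one source of reference signals at a time.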
Matti