Difference between PCT and Subsumption Architecture. Also help designing an ABM using PCT

Dear IAPCT,

Some background:
I am currently building an Agent Based Model (ABM) to investigate how people attribute intentions/behaviours/goals to agents. This is similar to the Heider and Simmel experiment, except that the shapes are self-directed agents rather than hand-drawn animations. The joy (and sometimes the intimidating aspect) is that it is completely up to me what behaviour the agents produce; I am thinking along the lines of dancing, chasing, and hiding, for example.

Key to the ABM is the agent architecture. This is an area I am new to, and I see PCT as a potentially useful theory, as it has a way to explicitly represent the agent’s goals and intentions as reference signals. Other roboticists/AI theorists I have been reading include Valentino Braitenberg, Ross Ashby, Grey Walter and, in particular, Rodney Brooks. PCT seems to follow naturally from here.

I hope I can ask two sets of questions: 1) questions I have as a PCT novice, to help me clear up some conceptual confusion, and 2) questions on how to actually construct a PCT architecture for my ABM.

Clearing Conceptual Confusion
Brooks’ subsumption architecture is, on the face of it, very similar to HPCT. Am I right in thinking that you could include subsumption architecture as a case of a hierarchical control system? I have read Kennaway’s paper on a control system for a walking robot, and he distances subsumption architecture from PCT for two reasons: firstly, higher layers in the subsumption architecture can act directly on the actuators; secondly, subsumption architecture is built as an input-output device, not as a control system, which is suggested to exclude it from being PCT.

I am interested in what is meant by the second point, because Brooks’ approach was a rejection of the more open-loop Sense-Model-Plan-Act GOFAI paradigm and is described as layered “sensorimotor loops”. Are these loops not closed-loop control? If you read the description of the first three-layered subsumption-architecture robot built by Brooks and his MIT team (available towards the end of section 6 in Intelligence without Representation), you see that many of the finite state machines in the network act as comparators (albeit sometimes as binary switches, rather than generating a continuous error signal). In what ways is subsumption architecture comparable to PCT? If an agent is described as an input-output device, does this preclude it from being a closed-loop control system (I presume so)?
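My current (possibly mistaken) understanding of that comparator difference, as a quick Python sketch with names of my own choosing:

```python
def subsumption_comparator(perception: float, threshold: float) -> bool:
    """Binary switch: triggers a behaviour when the input crosses a threshold."""
    return perception > threshold

def pct_comparator(perception: float, reference: float) -> float:
    """PCT comparator: emits a graded, continuous error signal for downstream gain."""
    return reference - perception

print(subsumption_comparator(0.7, threshold=0.5))  # True -> behaviour fires
print(pct_comparator(0.7, reference=0.5))          # -0.2 -> graded correction
```

Is that a fair characterisation, or am I missing something deeper about what makes subsumption an input-output device?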

Practical Questions
The main reason I have for making this post is to get some general advice on how I should build my ABM. These questions mainly relate to creating more complex, higher-order control loops and to dealing with multi-dimensional signals. I can create very simple behaviours, like following or chasing, by setting the controlled variable to the distance from another agent. Are there more sophisticated techniques I can employ?
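For reference, this is roughly what my chasing loop looks like at the moment (a minimal sketch with illustrative names; proportional control only, with perceived distance as the controlled variable):

```python
import math
from dataclasses import dataclass

@dataclass
class Agent:
    x: float
    y: float

def chase_step(me: Agent, other: Agent, reference: float = 0.0,
               gain: float = 0.5, dt: float = 0.05) -> None:
    """One tick of a chase loop: the controlled variable is perceived distance."""
    dx, dy = other.x - me.x, other.y - me.y
    perceived = math.hypot(dx, dy)   # perceptual signal
    error = reference - perceived    # comparator
    if perceived > 0:
        ux, uy = dx / perceived, dy / perceived
        # Negative error (too far away) moves me toward the other agent;
        # positive error (too close) moves me away.
        me.x -= gain * error * ux * dt
        me.y -= gain * error * uy * dt

me, other = Agent(0.0, 0.0), Agent(3.0, 4.0)
chase_step(me, other)  # me takes a small step toward other
```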

I have implemented a physics engine, and my robot can actuate change by applying torque and force, which in turn change angular velocity and velocity. I plan for my first-order control layer to control velocity and angular velocity by applying these forces and torques, following the example of Kennaway’s simulation. It is really the higher-order, inter-agent control loops where I am on shakier ground.
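To show where I am heading, here is the cascade I have in mind as a sketch (names are mine and the physics-engine details are omitted): a higher-order loop controls perceived distance by setting the reference for a first-order loop, which controls velocity by outputting force.

```python
class VelocityLoop:
    """First-order loop: perceives velocity, outputs a force for the physics engine."""
    def __init__(self, gain: float = 2.0):
        self.gain = gain
        self.reference = 0.0            # set each tick by the higher-order loop

    def step(self, perceived_velocity: float) -> float:
        error = self.reference - perceived_velocity
        return self.gain * error        # force to apply to the body

class DistanceLoop:
    """Higher-order loop: perceives distance to another agent and outputs a
    velocity reference (convention: positive velocity is toward the target)."""
    def __init__(self, gain: float = 0.8, reference: float = 0.0):
        self.gain = gain
        self.reference = reference      # desired distance to maintain

    def step(self, perceived_distance: float) -> float:
        error = self.reference - perceived_distance
        return -self.gain * error       # too far away -> positive velocity reference

# One tick of the cascade (the force would be passed to the physics engine):
dist_loop, vel_loop = DistanceLoop(reference=1.0), VelocityLoop()
vel_loop.reference = dist_loop.step(perceived_distance=10.0)  # -> 7.2
force = vel_loop.step(perceived_velocity=0.0)                 # -> 14.4
```

Does this structure, where higher-order outputs only set lower-order references, look like a sensible starting point for the inter-agent loops?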

Many thanks,
Alex

Alex, thanks for posting this. Several folks have been particularly engaged with robotics. I look forward to their responses.


RM: I think this is an interesting idea, Alex. I’ve written a bit on this subject. Here’s a pointer to a paper of mine that I think is relevant. It’s not specifically about how people attribute intentions but, rather, about the accuracy of these attributions.

RM: In PCT an intentional agent is a control system; the intention of such systems is to keep some perceptual variable (or variables) in reference states specified by the system itself. It’s not clear to me whether your Agent Based Models will be input control systems or programmed output generators. If they are the latter then none of them will actually be intentional agents (as in the Heider and Simmel experiment); if they are input control systems then they are all intentional agents.
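RM: To make that distinction concrete, here is a minimal sketch (illustrative Python, with hypothetical names) contrasting the two kinds of agent:

```python
import math

def output_generator(t: float) -> float:
    """Programmed output generator: replays a fixed pattern regardless of input."""
    return math.sin(t)

class InputControlSystem:
    """Input control system: varies output to keep a perception at its reference."""
    def __init__(self, reference: float, gain: float = 10.0):
        self.reference, self.gain, self.output = reference, gain, 0.0

    def step(self, perception: float, dt: float = 0.01) -> float:
        error = self.reference - perception
        self.output += self.gain * error * dt   # integrating output function
        return self.output

# A disturbance added to the controlled perception is resisted by the control
# system; the output generator's "behaviour" is unaffected by any input at all.
```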

RM: So the question in your research is apparently about what observable characteristics of behavior lead people to say that the behavior was done intentionally, whether it was actually done intentionally or not. I am developing a little demonstration/game based on the idea that only control systems are intentional agents (they intend to produce particular results) and that the only way to determine whether the behavior under observation is that of an intentional agent is to test to determine whether it is the behavior of a control system.

RM: My little demo/game is aimed at showing how this determination can be made correctly and reliably using a PCT-based approach to understanding behavior called the Test for the Controlled Variable. A “prototype” version of the demo/game is here.

RM: I hope you find it interesting.

Best

Rick

My impression of what Brooks has written, including the cited paper, is that the architecture is never described in terms of control systems, having a perception, a reference, and an output designed to always bring the perception closer to the reference. He talks instead about behaviours and agents, the agents being, in his words, “simple finite state machines”. He does not describe, in that paper at least, how the finite state machines were programmed. It all looks like ad hoc hacking that would only by accident build something that could be interpreted as a set of control systems.

I never took my simulated robot much farther. I did use a four-level hierarchy to have it balance a broomstick on its back, similarly to Powers’ inverted pendulum controller, the robot taking the place of the carriage on the track. All of this was in simulation, which has the disadvantage of making things too easy. Physical devices never behave as nicely as the simulation, even if you try to simulate physical devices.

I’ll be interested to see what larger hierarchies of control systems you come up with.

I may have some more time at the end of the month to answer more extensively, but I am a bit tied up at the moment… I did mention Brooks’ work briefly in my paper Re-writing robotics – Perceptual Robots. As I recall, Brooks’ system did have one PCT-like controller, but most of its components were not (they were repetitive open-loop devices rather than closed-loop control systems).

Thank you, everyone, for these responses, which have been extremely helpful as I write up my dissertation. As I read more deeply into this topic, I realise how interesting it is!