logic & imagination

In approximately 3 months of hanging around philosophers, I have read
and heard a fair amount about `belief-desire' psychology, but
not seen much in the way of concrete development, so here's
a preliminary effort, for consideration, as musings in the
direction of a possible model. The basic idea is to use some
simple logic as part of a sort of `propositional control system'
running through imagination. It's very hazy in many respects,
but any thoughts people might have are welcome.

You, a tourist new to Australia, and an avid tide-pool life-form
freak, are standing in the customs control line at Kingsford Smith
airport, and overhear one person behind you saying to another:

  little white octopuses with blue rings on their tentacles are
    lethal if they bite you.

Some time later, peering into a tidepool, you see a little white
octopus with blue rings on its tentacles, but diverging from your
normal inclinations in such cases, you decline to pick it up.
What's going on? Here's a guess.

Upon hearing the sentence, you deposit in your `belief box' something
like this:

    x,y
    octopus(x)
    little(x)
    white(x)
    blue-ringed(x) ---> die(y)
    bite(x,y)
    person(y)

This is a putative LOT-version of `if a little white octopus with blue rings
(on its tentacles - I've abbreviated this) bites a person, they die'.
The notation is borrowed from the Kamp-Heim theory of `Discourse
Representation Semantics'.
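
For concreteness, here is one (purely illustrative) way such an entry
might be coded up, say in Python: atoms as tuples and a conditional
belief as a triple of discourse referents, antecedent conditions and
consequent conditions. The particular encoding is just a guess at one
possibility, not something the model is committed to.

    # Illustrative encoding only: an atom is a tuple ('pred', arg, ...),
    # and a DRS-style conditional pairs its discourse referents with
    # antecedent and consequent condition lists.
    blue_ring_belief = (
        ('x', 'y'),                              # discourse referents
        [('octopus', 'x'), ('little', 'x'),      # antecedent conditions
         ('white', 'x'), ('blue-ringed', 'x'),
         ('bite', 'x', 'y'), ('person', 'y')],
        [('die', 'y')],                          # consequent: die(y)
    )

    belief_box = [blue_ring_belief]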

Then, when you peer into the pool, to your belief box is added (courtesy
of the higher levels of the visual system):

    octopus(o)
    little(o)
    white(o)
    blue-ringed(o)

Due to your general knowledge of creatures, there is also already sitting
in your belief box:

    x,y
    animal(x)
    person(y) --> might [bite(x,y)]
    hold(y,x)

    person(i)
    me(i)

    x
    octopus(x) --> animal(x)

So much for beliefs. Now suppose one tentatively contemplates
embracing the goal `hold(i,o)'. How such goals are generated I
won't consider yet, but suppose that, wherever it comes from,
it goes into an `intention' box, where goals are put for
a while before actually being set as reference levels for motor
systems. An intention box is, I shall say, a species
of a general type of `supposition box', where possibilities are
entertained. Deductions in a supposition box can be carried out
using materials from the belief box, so from its initial contents:

  hold(i,o)

by logic (universal instantiation and modus ponens, for those into such
things), we first get

    might[bite(o,i)]

But, since this is a supposition box, we can strip off the `might', and get:

   bite(o,i)

and now by a bit more reasoning with what is in the belief box we get:

  die(i)

This is a `negative goal', let us say, in the `diswant box' (it could
well just be a logically negative goal in a `want box'), but I'm currently
impressed by the fact that Australian languages usually have & make
great use of a grammatical ending to express such negative goals, so I'm
guessing that they are in some sense `primitive'. The basic
idea of the diswant box is that when it seems likely that something
described in it might happen, something happens to reduce or
eliminate this likelihood.

In particular, if putting something in the intention box produces a
result that matches something in a diswant box, then the original
intention tends to get suppressed. This is internal loop-closure.
So where do intentions come from? Mostly, I presume, from superordinate
ones - surviving a stay in the intention box is a sort of check that
a proposed (sub-)goal has to pass before it is allowed to actually
start sending off error signals (i.e. actually causing things to happen).
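
To make the loop concrete, here is a minimal sketch (in Python,
continuing the tuple encoding sketched earlier) of a supposition box
being closed under the belief box by universal instantiation and modus
ponens, with `might' stripped off, and the result checked against the
diswant box. The naive forward-chaining engine and all the names in it
are my own invention for illustration; nothing hangs on the details.

    def subst(pat, env):
        """Replace bound discourse referents in a (possibly nested) atom."""
        if isinstance(pat, tuple):
            return tuple(subst(p, env) for p in pat)
        return env.get(pat, pat)

    def match(pat, fact, env, refs):
        """Extend env so that pat instantiates to fact, or return None."""
        if isinstance(pat, tuple):
            if not isinstance(fact, tuple) or len(pat) != len(fact):
                return None
            for p, f in zip(pat, fact):
                env = match(p, f, env, refs)
                if env is None:
                    return None
            return env
        if pat in refs:                       # a discourse referent: bind it
            if pat in env:
                return env if env[pat] == fact else None
            return {**env, pat: fact}
        return env if pat == fact else None

    def matches(antecedents, facts, env, refs):
        """All ways of matching a list of antecedents against the facts."""
        if not antecedents:
            yield env
            return
        for fact in facts:
            env2 = match(antecedents[0], fact, env, refs)
            if env2 is not None:
                yield from matches(antecedents[1:], facts, env2, refs)

    def close(facts, rules, strip_modals=False):
        """Universal instantiation + modus ponens to a fixpoint; in a
        supposition box the modal operators get stripped off as well."""
        facts, changed = set(facts), True
        while changed:
            changed = False
            for refs, antecedents, consequents in rules:
                for env in list(matches(antecedents, tuple(facts), {}, refs)):
                    for c in consequents:
                        new = subst(c, env)
                        if strip_modals and new[0] in ('might', 'will'):
                            new = new[1]
                        if new not in facts:
                            facts.add(new)
                            changed = True
        return facts

    # The belief box, abbreviated to the bits used above.
    belief_box = [
        (('x', 'y'), [('octopus', 'x'), ('little', 'x'), ('white', 'x'),
                      ('blue-ringed', 'x'), ('bite', 'x', 'y'),
                      ('person', 'y')], [('die', 'y')]),
        (('x', 'y'), [('animal', 'x'), ('person', 'y'), ('hold', 'y', 'x')],
         [('might', ('bite', 'x', 'y'))]),
        (('x',), [('octopus', 'x')], [('animal', 'x')]),
    ]
    percepts = {('octopus', 'o'), ('little', 'o'), ('white', 'o'),
                ('blue-ringed', 'o'), ('person', 'i'), ('me', 'i')}
    diswant_box = [(('x',), ('die', 'x'))]     # aversion to anyone dying

    # Tentatively entertain hold(i,o), close the supposition box under
    # the beliefs, and suppress the intention if anything diswanted follows.
    supposition = close(percepts | {('hold', 'i', 'o')}, belief_box,
                        strip_modals=True)
    bad = [f for refs, pat in diswant_box for f in supposition
           if match(pat, f, {}, refs) is not None]
    if bad:
        print('intention hold(i,o) suppressed; it would lead to:', bad)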

Goals in diswant boxes are amongst those that can generate intentions,
along the following lines:

Now suppose a child leans over the pool and starts reaching for the
octopus; you do something like grab its arm and push it away from the
pool, or at least say `stop', `No', or something like that. What's going
on here, I suspect, is that we have in our diswant box a fairly strong
aversion to children dying:

    x
    child(x)
    die(x)

Actually, to anyone dying, but I suspect it is stronger in the case of
children. There is also a sort of `projection' mechanism, which scans
what's going on in the environment and generates predications about what, ceteris
paribus, is likely to happen, perhaps embodied in rules like:

   x,y
   person(x) --> will[hold(x,y)]
   reach_for(x,y)

So, from watching the child reaching for the octopus, we wind up
with a supposition box containing (`will' strips off like `might' does):

   die(c)
   child(c)

which is subsumed by something in the diswant box, so an error
signal is generated. What this error signal does is set off a search
for possible actions that will avert the looming consequences. Presumably
many different actions could be considered, scored, & the best one chosen
(somehow - the image in my head is that when the logic constructs these
derivations it is actually setting up connections in some sort of dynamic
system that tries to find an equilibrium value where the stuff in the
diswant boxes has value 0 & the want boxes value 1, but I'm not ready to
take that part of it on yet).
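
Without taking that dynamic-system story on here either, a toy version
of just the scoring step might look like the following: each candidate
action comes with a hand-coded, purely illustrative guess about which
of the ceteris-paribus projections it would cancel, and the error
signal picks whichever candidate leaves the fewest diswanted outcomes
standing.

    # Toy action-selection sketch; the candidates and what they cancel
    # are stand-ins for real projection/default reasoning.
    looming = {('hold', 'c', 'o'), ('die', 'c')}   # ceteris-paribus projection
    diswants = {('die', 'c')}

    candidates = {
        'do_nothing': set(),                                # cancels nothing
        'say_NO': {('hold', 'c', 'o'), ('die', 'c')},       # child stops reaching
        'grab_arm': {('hold', 'c', 'o'), ('die', 'c')},
    }

    def score(action):
        """Lower is better: diswanted outcomes still projected to happen."""
        return len((looming - candidates[action]) & diswants)

    best = min(candidates, key=score)
    print('winning intention:', best, 'with score', score(best))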

  Suppose we also have a principle that says:

    x,y,z
    person(x)
    person(y)
    action(z)  --->  will[stop(y,z)]
    doing(y,z)
    says(x,"NO!!")

What we want is that putting this in the intention box will cancel the
implication that the child will wind up holding the octopus, and therefore
that it might die (which means we're into default logic). There also has
to be a logic of `events', which I have left out, though there is quite a
lot of stuff written about it. Supposing that this intention
wins the competition (is judged the best bet for keeping the stuff in
the diswant boxes from being satisfied), it then gets set as a reference
level, and out comes the word.
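
Here, finally, is the same default-cancellation step in toy form: the
projection of hold(c,o) (and hence die(c)) is only a default, and
putting the says(i,"NO!!") intention into play defeats it, so the
diswant box is no longer threatened. The defeat machinery here is
invented purely for illustration, not a serious proposal about default
logic or the logic of events.

    # Toy default cancellation: what is projected to happen, given what
    # is currently in the intention box.  The defeat condition is
    # hard-wired just to show the shape of the step.
    def projected(intentions):
        # default: the reaching child will wind up holding the octopus,
        # and so (via the belief box) will die ...
        if ('says', 'i', 'NO!!') in intentions:
            return set()           # ... unless someone says "NO!!" in time
        return {('hold', 'c', 'o'), ('die', 'c')}

    diswants = {('die', 'c')}

    print(projected(set()) & diswants)                    # {('die', 'c')}: error signal
    print(projected({('says', 'i', 'NO!!')}) & diswants)  # set(): averted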

Comments Welcome

Avery.Andrews@anu.edu.au