CROWD modifications; Contrast perception

[From Bill Powers (931018.0845 MDT)]

Avery Andrews (931018.1058) --

I cranked up the proximity factor (and turned down the avoid
dir gain), and it worked like you said it would, but then of
course it couldn't get to a goal that was right next to a wall
of people, like I said.

Yes, if there is a large bunch of people next to the destination,
the avoidance system will conflict hopelessly with the
destination-seeking system. This is sort of like dropping your
wristwatch into the lions' cage. You would really like to reach
in and retrieve it, but ...

So I still want to treat obstacles that are between the person
and a goal differently from those that aren't.

I didn't mean "hands off." Feel free to alter the model any way
you like. To play fair, however, you have to make the model
behave strictly on the basis of controlling what it can know from
its perceptual signals. This means you can't act like God,
telling the model that B is between A and C: the betweenness has
to be derived from the perceptual inputs if it's to figure
explicitly into controlled perceptions.

Right now there are only two perceptual signals representing
obstacles: total proximity left and total proximity right (both
weighted by angle from straight ahead). Individual objects are
not distinguished. At present, the only way to detect an object
between you and the destination is to sense equal and significant
proximity on the left and right while headed straight toward the
destination. However, that pattern can equally well signify two
separate objects at equal distances to the left and right.
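
As a concrete sketch (in Python; the 1/(k + distance) proximity
and the cosine angular weighting are illustrative assumptions, not
necessarily what the CROWD program computes), the two signals
might be formed like this:

    import math

    def proximity_signals(agent, heading, obstacles, k=1.0):
        # Total proximity to the left and right of straight ahead,
        # each obstacle weighted by its angle from the heading.
        left = right = 0.0
        ax, ay = agent
        for ox, oy in obstacles:
            dx, dy = ox - ax, oy - ay
            dist = math.hypot(dx, dy)
            if dist == 0.0:
                continue
            # bearing relative to the heading, wrapped to (-pi, pi]
            rel = math.atan2(dy, dx) - heading
            rel = math.atan2(math.sin(rel), math.cos(rel))
            if abs(rel) > math.pi / 2:
                continue                    # behind the agent: not sensed
            w = math.cos(rel) / (k + dist)  # nearer and more central = larger
            if rel >= 0.0:
                left += w                   # positive bearing = to the left
            else:
                right += w
        return left, right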

It seems to me that the only way to achieve what you want is to
make the perceptual system far more complex: give it the ability
to distinguish among proximities at various angles, in effect
increasing the angular resolution. And of course, if the model
people then turn out to have greater skills, nobody should be
surprised: when you make the system more complex, it SHOULD be
able to act more intelligently.
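
For instance (a minimal sketch, with an assumed bin count and
weighting), the single left/right pair could be replaced by one
proximity signal per angular bin across the forward field:

    import math

    def proximity_by_angle(agent, heading, obstacles, n_bins=16, k=1.0):
        # One proximity signal per angular bin across the forward
        # half-plane: higher angular resolution than left/right.
        bins = [0.0] * n_bins
        ax, ay = agent
        for ox, oy in obstacles:
            dx, dy = ox - ax, oy - ay
            dist = math.hypot(dx, dy)
            if dist == 0.0:
                continue
            rel = math.atan2(dy, dx) - heading
            rel = math.atan2(math.sin(rel), math.cos(rel))
            if abs(rel) >= math.pi / 2:
                continue                    # behind the agent
            # map (-pi/2, pi/2) onto bin indices 0 .. n_bins-1
            i = int((rel + math.pi / 2) / math.pi * n_bins)
            bins[min(i, n_bins - 1)] += 1.0 / (k + dist)
        return bins

An object in line with the destination then shows up as a peak in
the central bins while you're headed at the destination, and is
distinguishable from two flanking objects, whose peaks fall
off-center.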

I should think that rather than changing the basic avoidance
system, you would be better off using what is there, but adding
layers that alter avoidance reference levels for different types
of obstacles (recognized as different at a higher level). So you
wouldn't be able to bring yourself to reach into the lions' cage
for your wristwatch, but would have no trouble with a cage full
of puppies. To do this you would just have to postulate multiple
perceptual systems, each capable of detecting a different
_quality_ of the proximity, as the system is now postulated to
know the difference between a destination, an obstacle, and
another person (if a person is being sought). In a bug model this
would amount to assuming left and right antennae equipped with
multiple sensors for detecting differences in concentrations of
various substances, independently but with the same geometry.
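
As a sketch of that top layer (the obstacle qualities and the
reference values below are hypothetical examples, not anything in
the present model):

    def avoidance_errors(proximity, reference):
        # One avoidance error per perceived quality of obstacle.
        # Proximity above its reference drives avoidance; a high
        # reference tolerates a close approach.
        return {q: max(0.0, p - reference.get(q, 0.0))
                for q, p in proximity.items()}

    # Same proximity, different quality, different behavior:
    # avoidance_errors({"lion": 4.0},  {"lion": 0.0, "puppy": 6.0})
    #   -> {"lion": 4.0}   strong avoidance: leave the wristwatch
    # avoidance_errors({"puppy": 4.0}, {"lion": 0.0, "puppy": 6.0})
    #   -> {"puppy": 0.0}  no error: reach right in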

----------------------------------------------------------------
Rick Marken (931017.2000) --

Then "contrast" refers to one possible value of a perceptual
variable, viz. the perceived relationship between two word
perceptions. This means that there is NO possibility of a
perception of contrast when you hear just ONE word; there is
nothing to contrast it with. So when you hear "pin" you hear NO
contrast. That's all I've been trying to say all along.

There is a way to get a contrast signal from two perceptions that
don't occur at the same time. If the perceptual function averages
over some relatively long time, it can produce a difference
signal from perceptions that occur at different times. That is:
                   avg
      A A A A ----------> a1

                         avg
            B B B B ----------> a2

... and then the contrast perception p is computed as p = a2 - a1.
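
A leaky integrator is one simple form such a slow perceptual
function could take (the time constant tau is whatever makes the
two averages overlap in time; a sketch, not a commitment):

    def leaky_average(avg, x, dt, tau):
        # First-order leaky integrator: avg decays toward x with
        # time constant tau, so its output is roughly the average
        # of x over the last tau seconds.
        return avg + (x - avg) * (dt / tau)

    # a1 = leaky_average(a1, A, dt, tau) while the A's occur;
    # a2 = leaky_average(a2, B, dt, tau) while the later B's occur.
    # Because a1 persists after the A's stop, p = a2 - a1 exists
    # even though A and B never overlapped in time.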

If we have a largish number of inputs A ... G occurring at random
times during speech, the (weighted) sum of these averaged
perceptions can be obtained. Comparing this sum signal with a
reference sum gives an error signal, which can be used to produce
an output signal that adjusts the gains of all the individual
perceptions up and down so as to maintain a constant sum.
Now signal A, for example, becomes

          A' = A / (A + B + C + ...)

Each signal is now scaled to be a fraction of the total (or
average) of all the different signals occurring over some length
of time (the length depending on the slowness of the perceptual
function above). The _pairwise_ contrast isn't explicitly
represented as a perception, but it's implicit in the group
contrast calculation.
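
A sketch of that loop (the loop gain k and the reference sum are
arbitrary choices here): a single shared gain g is driven so that
the gain-scaled sum matches the reference, and at equilibrium each
scaled signal is the reference times its fraction of the total,
which is just the normalization above:

    def normalize_step(avgs, g, ref_sum, dt, k=0.5):
        # One step of the sum-normalizing control loop.  At
        # equilibrium g = ref_sum / (A + B + C + ...), so each
        # scaled signal is ref_sum * A / (A + B + C + ...).
        error = ref_sum - g * sum(avgs)
        g += k * error * dt                 # slow integrating output
        scaled = [g * a for a in avgs]      # the normalized perceptions
        return g, scaled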

One implication of this way of explaining contrast effects would
be that isolated phonemes would be harder to identify correctly
than phonemes heard in context. If you sat in silence for some
minutes (hours?), then heard a snippet you previously identified
as an "eh," you might report it as an "ih" or an "ah". Or if you
listened to speech with all the sounds systematically biased
toward the schwa, then after a short pause heard an isolated
"ih", you might report it as an "ee". Bruce?
---------------------------------
If the detection of formant frequencies is really at the bottom
of all this, then it seems essential to devise a perceptual
function that can report frequency as a variable, rather than
just amplitude at a fixed frequency. I'll be working on that.
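
One standard way to get such a signal, offered only as a sketch of
the kind of function required (numpy, autocorrelation; not a
finished design): report the dominant frequency of a short window
from the first peak of its autocorrelation. Applied to a
band-filtered slice of the spectrum, the frequency it reports
would track the formant within that band.

    import numpy as np

    def frequency_signal(samples, rate):
        # Report the dominant frequency of a short window as a
        # scalar perceptual signal, via the first peak of the
        # autocorrelation past the zero-lag maximum.
        x = samples - samples.mean()
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
        d = np.diff(ac)
        start = int(np.argmax(d > 0))   # first lag where ac rises again
        if start == 0:
            return 0.0                  # no rise: no periodicity found
        lag = start + int(np.argmax(ac[start:]))
        return rate / lag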
----------------------------------------------------------------
Best to all,

Bill P.