rotten apples; control of logical conditions

[From Bill Powers (950303.0945 MST)]

Martin Taylor (950302.1300)--

     I take "perception" to represent the output of a perceptual
     function. But "perception of" is something else again.
     "Perception of" is an assertion by an an outside agent (i.e.
     another perceptual system, whether in the same hierarchy or
     another) whose input is not only the perceptual signal, but also
     some aspect of the "real" world. "Perception of" is a statement
     that some perceptual signal correlates with some other perception
     of the real world.

This is a very clear statement of the problem. The problem is
exacerbated when the two people think they are talking about the same
thing in the external world, and don't realize that there is an
undefined term:

Given:  Your perception of X
        My perception of X

Then:   Your perception corresponds to my perception.
        Our perceptions correspond to X (where X is still undefined).

     ... if I happen to have in my hierarchy a PIF that gives a strong
     output in the presence of rotten apples, but not of rotten grapes,
     I claim that I truly perceive a rotten apple--whether or not it is
     a scented plastic imitation. The facts of the real world are never
     known to me, but my perceptions may change when new sensory data
     become available. What I perceive NOW is truly what I perceive.

When you have a PIF (perceptual input function) that reports the
presence of a rotten apple smell, there is no need to infer or imagine
its presence. This observation is infallible -- unless you go on to
claim that the apple would indeed prove to be brown and mushy inside if
you opened it up. THAT would be an inference. If there is no other apple
nearby, the inference might prove out 99.9% of the time, but the claim
would still be an inference, not an observation (a perception).

     But a process that says "I perceive red. I perceive round. I
     perceive dark small line protruding from round... Therefore I have
     an apple" is quite different from perceiving an apple.

Exactly what I was getting at. You put the problem as succinctly as
possible in saying that it is represented by "perception of ...". This
very common usage begs the question (meaning that the statement or
question assumes without proof the very issue being stated or
questioned, as in "Have you stopped beating your wife?"). Discussions of
epistemology are constantly running afoul of this logical shoal. "When
you see an apple, is the apple really there?" The only proper answer is
another question: "What apple are you talking about?"

One of my favorite diagrams concerning perception is the one that shows
a right-side-up vertical arrow in the environment, an upside-down image
of it on the retina (with optical rays connecting the appropriate
points), and a pathway into the brain where the same arrow is again
shown. The naivete is charming. The same diagram, of course, applies to
us as we view the diagram, and so on forever.

Anyway, have you also considered this subject as it applies to
"information about ..."?


----------------------------------------------------------------------
Bruce Abbott (950302.1600 EST)--

     We operant guys would say that the two responses, keypecking and
     turning, are both under "stimulus control," meaning that the
     probability of observing each response depends on which stimulus
     (red or green) is currently present. What controlled variable is
     being disturbed? Why does behavior change when the "discriminative
     stimulus" changes?

I think we need to take these questions seriously. We have never tried
to model the situation where one perception appears to work as a signal
rather than as a controlled variable in itself. It's possible, of
course, that this interpretation of the role of the perception is
misleading, but we need to find the "correct" description and show that
it makes at least as much intuitive sense. The problem is similar to
that of showing that some "stimuli" should really be interpreted as
disturbances. Once you can see exactly what is disturbed, and how the
control action counteracts the disturbance and _appears_ to be caused by
the stimulus, the PCT interpretation becomes at least as believable as
the other one. We need to do this for the case of "discriminative
stimuli" or else admit that some stimuli serve as triggers for other
control processes.

     I didn't say that it would be particularly difficult to develop a
     coherent PCT account of these phenomena, only that it needs to be
     done and that I'm starting to think about it. I'd certainly
     welcome your insights.

Probably the most direct solution is the one Rick Marken suggests:
consider that there is control of a logical condition:

  p = (Red AND peck) OR (Green AND turn-in-circle)
  r = TRUE

There are two contributing lower-order perceptions under the organism's
control (peck, turn-in-circle) and two that are independently variable
(Red, Green). As I interpret the description, there are two additional
relationships imposed by the environment: red XOR green is true, meaning
that the light is either green or red but never both, and peck XOR turn-
in-circle is true, meaning it is physically impossible to do both at
once.

We can investigate our guess about the nature of the controlled
perception by trying the various combinations of disturbances and seeing
if the behavior always changes to keep the perception defined above
matching the assumed reference condition, TRUE. For example, we could
turn one light blue, a color that is neither red nor green, which would
make the perception FALSE no matter what pattern of behavior the
organism controlled. This should lead to anomalous behavior, perhaps
switching back and forth between the two behavior patterns; the
switching would persist for a long time because there is a permanent
error signal that no action can remove.

On the other hand, we could turn both lights on at once. This makes the
proposition TRUE no matter which behavior pattern is selected, so we
might expect one pattern to be picked at random, after which it would be
maintained because there is no error to produce another switch.
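
To make the proposed Test concrete, here is a minimal sketch in Python
(the names RED/GREEN and peck/turn are illustrative, and this is only a
truth-table check, not a simulation of the full control loop). It
tabulates which action, if any, keeps the proposed perception at its
reference value TRUE under each condition, including the blue-light and
both-lights disturbances:

  # Sketch: truth-table check of the proposed controlled perception.
  # All names are illustrative, not code from any actual model.

  def p(red, green, peck, turn):
      # p = (Red AND peck) OR (Green AND turn-in-circle)
      return (red and peck) or (green and turn)

  def actions_keeping_p_true(red, green):
      # peck XOR turn: the organism can do one or the other, never both.
      options = {"peck": p(red, green, True, False),
                 "turn": p(red, green, False, True)}
      return [act for act, ok in options.items() if ok]

  for red, green, label in [(True,  False, "red only"),
                            (False, True,  "green only"),
                            (False, False, "blue light (neither red nor green)"),
                            (True,  True,  "both lights on")]:
      print(label, "->", actions_keeping_p_true(red, green))

The blue-light row returns no action at all (permanent error), and the
both-lights row returns both actions, matching the predictions above.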

Of course there are many other possibilities to investigate. Is a green
light equal to not-red? Or does not-red mean that the light is off?
There are other possible logical perceptual functions: for example,
logical implication may be involved, as in "It is not the case that A is
true and B is false." Logical implications have their own peculiarities:
if A implies B, the proposition is false ONLY if A is true and B is
false. All other combinations leave the implication TRUE -- even A false
and B false.
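
For concreteness, a few lines of Python (a sketch of the truth table
only, nothing from the experiment) verify that peculiarity:

  # Truth table for implication: "A implies B" == NOT(A AND NOT B).
  for A in (True, False):
      for B in (True, False):
          print("A =", A, " B =", B, " A implies B:", not (A and not B))
  # FALSE only when A is true and B is false; TRUE for all other
  # combinations, including A false and B false.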

The only way to find out what logical function (if any we can express
in Boolean algebra) describes the perceptual function is to test
various hypotheses -- the good old Test again. The actual
behavior involved, pecking or turning in a circle, is of little interest
in itself, however striking it may be to the experimenter. What we would
be investigating would be the logical function of perceptual variables
that is under control by the organism.

To do this it is also necessary to look carefully at the logic of the
experiment. What is the contingency for the case when both lights are
on, or both off, or when a light of a new color is shown? If the animal
turns in circles, pecking at the key once each time around, what is the
contingency? It's very easy to set up the logic of an experiment with a
particular set of relationships in mind, and forget that you have to
cover all combinations of true and false, not just the ones you first
thought of. Bill Leach will no doubt support this observation;
forgetting to cover all logical possibilities is a pitfall of electronic
logic design. Whichever condition you forgot to provide for is almost
certain to be the next one that occurs.

     I can easily write a simulation that switches control systems in
     response to changing input. I don't see any conceptual roadblock
     to the notion that we living control systems can do what that
     simulation does.

Right. The problem is to find the _right_ logic, and by using the Test
with the real organism, to demonstrate that it accounts for behavior
under all the possible conditions.
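
For what it is worth, here is a toy version of such a switching
arrangement in Python (my own construction, not Bruce's simulation; the
gain and references are arbitrary). A logic level selects which
lower-order system gets a nonzero reference, and each lower-order
system is a simple proportional controller:

  def control_step(perception, reference, gain=0.5):
      # One step of a proportional controller: the output change is
      # proportional to the error (reference minus perception).
      return gain * (reference - perception)

  peck, turn = 0.0, 0.0
  for light in ["red"] * 5 + ["green"] * 5:
      # Logic level: the "discriminative stimulus" selects the references.
      peck_ref, turn_ref = (1.0, 0.0) if light == "red" else (0.0, 1.0)
      peck += control_step(peck, peck_ref)
      turn += control_step(turn, turn_ref)
      print("%-5s peck=%.2f turn=%.2f" % (light, peck, turn))

The behavior appears to be "under stimulus control," yet each lower
system is only controlling its own perception relative to a reference
set from above.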

P.S. Got the program for your meeting. Having PCT represented at a non-
PCT meeting by two people, you and Dennis Delprato, will be a first for
us!
---------------------------------------------------------------------
Best to all,

Bill P.

[Martin Taylor 950303 16:50]

Bill Powers (950303.0945 MST)

On the difference between "perceiving" and "perception of":

     Anyway, have you also considered this subject as it applies to
     "information about ..."?

Yes, indeed. I've tried to make it explicit, quite a few times.

     When you have a PIF (perceptual input function) that reports the
     presence of a rotten apple smell, there is no need to infer or
     imagine its presence. This observation is infallible -- unless you
     go on to claim that the apple would indeed prove to be brown and
     mushy inside if you opened it up. THAT would be an inference.

Probably so. But if you'll excuse the expression, this paragraph opens
up a whole can of worms. Since I will be away and not able to continue
the discussion until Monday 13 at the earliest, I'm not sure whether to
use one of the worms to bait the hook--but why not?

By talking of a PIF that reports a "rotten apple smell" rather than one
that reports a "rotten apple" perceptual signal, you are changing the
terms of the debate. If there exists a "rotten apple" PIF, it probably
takes as input (a transformation of) the "rotten apple smell" perceptual
signal, in addition to (transformations of) a host of visually based and
perhaps tactually based perceptual signals. It isn't just the smell. The
point is that the perception is "of" a rotten apple (outside observer's view).
The perception is a signal in a control system that acts so as to change
the magnitude of the perception of rotten apple, not one that acts so as
to obscure the smell.

But the worms are elsewhere in the can of apple-sauce. A "rotten apple"
is a category with a verbal label. The worms lie there and in the
sporadic thread on "association." I'm really dubious about starting this
right now, because it really requires a longish essay for which I don't
have time. But I'll try it, and see whether the discussion of the next ten
days includes it (in a fruitful, apple-pie, non-rotten way :-).

To begin, consider the flip-flop connection of perceptual functions, in
which the output of one PIF is connected with negative sign to the input
of another, and vice-versa. Such a connection is not envisaged in the
standard hierarchy, but I think (as I have previously argued and will not
argue again here) that it is a necessary element of the transition to the
category level of perception. The way it works is that if there are
two patterns of perception that often occur in a common context, but
never both together, then the perceptual functions develop mutually
inhibitory linkages. More to the point, actions that increase one of
the two perceptions always decrease the other, and the inhibitory connection
assists the relevant control.

The flip-flop is a long-standing electronic circuit, a basis of
computation and logical operations. Whether such a connection exists in
the nervous system is pure speculation--although its absence would be a
big surprise!

What the flip-flop connection does is to ensure that there is hysteresis
in the output perceptions of both control systems that are so connected.
If the data change smoothly between category "A" and category "B", the
data value at which the perception changes from A to B is more "B-like"
than is the data value at which perception changes from B to A. Furthermore,
the perception of "A" tends to be maintained even when the data for "A"
is reduced in magnitude, and the same for the perception of "B".

The flip-flop connection works well for three mutually inhibitory units,
and probably quite a few more, provided the gain of the inhibitory links
is not too high (I've never tried more than three).
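
Here is a minimal numerical sketch of such a pair in Python (binary
outputs and invented weights, chosen only to make the hysteresis
visible; nothing here is a claim about neural parameters):

  W, T = 0.3, 0.35   # inhibitory weight and firing threshold (invented)

  def settle(in_a, in_b, a, b):
      # Update the cross-inhibited pair until it stops changing.
      while True:
          na = 1 if in_a - W * b > T else 0
          nb = 1 if in_b - W * na > T else 0
          if (na, nb) == (a, b):
              return a, b
          a, b = na, nb

  a, b = 1, 0                        # start out perceiving "A"
  up = [i / 10 for i in range(11)]   # data sweeping from A-like to B-like
  for x in up + up[::-1][1:]:        # ...and back again
      a, b = settle(1 - x, x, a, b)
      print("x=%.1f" % x, "perceiving:",
            "A" if a else ("B" if b else "neither"))

With these numbers the switch from A to B happens at x = 0.7 on the
upward sweep, but the switch back from B to A not until x = 0.3 on the
way down: each perception is maintained well past the point where the
data alone would favor the other.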

There's another side to this. What happens if the cross-link between
perceptual functions is positively weighted, such that when the data make
the perceptual signal "A" increase, the "A" output makes the value of
perceptual signal "X" increase, and vice versa? Obviously, if the loop
gain is greater than unity, what happens is a disastrous lockup. It would
be a positive feedback runaway. But if the loop gain is only slightly
positive, much less than unity, what happens is that "X" is more likely
to be perceived when the data conform to "A" than when the data conform to
"B". I call this an "associative connection" between A and X.

If there is a flip-flop connection between X and Y, and an associative
connection between B and Y, an interesting pattern emerges. When the data
are "A-like", the raised perception of "A" depresses the perceptual signal
of "B" and enhances that of "X". The enhanced "X" depresses "Y", further
depressing "B" and enhancing "A". The "A-B..." set of perceptions are
associated with the "X-Y..." set of perceptions in such a way that the
occurrence of any member of one group enhances the likelihood that the
corresponding member of the other group will be perceived and depresses
the likelihood that a non-corresponding member of the other group will
be perceived.
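
A Python sketch of this four-unit arrangement (all weights invented;
the associative loop gain here is ASSOC squared = 0.09, well below
unity):

  INHIB, ASSOC = 0.6, 0.3

  def clip(u):
      # Keep signal magnitudes in [0, 1].
      return max(0.0, min(1.0, u))

  def run(in_a, in_b, steps=100):
      # A-B and X-Y are mutually inhibitory (flip-flop) pairs; A-X and
      # B-Y are weak positive (associative) links. Only A and B receive
      # data input; X and Y are the "label" pair.
      a = b = x = y = 0.0
      for _ in range(steps):
          a = clip(in_a - INHIB * b + ASSOC * x)
          b = clip(in_b - INHIB * a + ASSOC * y)
          x = clip(ASSOC * a - INHIB * y)
          y = clip(ASSOC * b - INHIB * x)
      return a, b, x, y

  for in_a, in_b, label in [(0.8, 0.2, "A-like data"),
                            (0.2, 0.8, "B-like data")]:
      a, b, x, y = run(in_a, in_b)
      print("%s: a=%.2f b=%.2f | label x=%.2f y=%.2f" % (label, a, b, x, y))

A-like data settle with "a" high, "b" suppressed, and a weak but
nonzero "x": the shadowy evocation of the corresponding label. B-like
data do the converse, raising "y".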

The "X-Y..." group can be thought of as "labels" for the "A-B..." group,
and vice-versa. "X-Y..." may actually BE verbal labels for categorical
perceptions "A-B...". And depending on the linkage strengths, it is possible
that the perceptions in the "X-Y..." group could even be evoked by data
that generate a strong perceptual signal in a member of the "A-B..." group.
Seeing a red patch might evoke a (shadowy) perception of the word "red."

This mechanism works for analogue perceptions at all levels of the hierarchy,
including, in particular, the relational level at which we perceive things
as being above, below, to the left of, inside... other things. To perceive
something as being inside something else might be to evoke (somewhat) the
verbal perception "inside." And to hear the word "inside" might evoke
(shadowily) the vision of something inside something else.

What you get is a structure that produces relatively strong perceptual
outputs to analogies. A flower "inside" a flowerpot is analogous to a dot
"inside" a square, and to a person "inside" a room. These are very weak
analogies, but when the same relationships occur among the members of one
group of things as among members of a quite different group, there are
many associations that can be evoked, producing an output related to the
control of one perception when the data coming in are from something
quite different--the analogy structure. If the analogy turns out to be
a good one, that control output will be effective, and the analogy will
be stable against reorganization.

There is lots more to this, but what I want to get back to is the notion of
the LABEL "rotten apple." The rotten apple is one that damages neighbouring
apples, and "the rotten apple in the barrel" might be the corrupt policeman
who damages the others in the same precinct. The perception of "the
rotten apple" might be as much due to the verbal label as to the sensory
smell and the brown spot; nevertheless, it could be a true perception, not
an inference.

I can perceive the opening of Beethoven's Fifth Symphony quite well, merely
by being exposed to those words (it happened just now) with no orchestra
or recording present. It's not an inference, but a very pleasant sequence
of sounds. That's what I perceive, but not when the radio is playing other
music (as it is now but wasn't when I wrote the start of this paragraph).

Anyway, I'll make a flat-out assertion, which I may change at some later
date: There is a specific part of the hierarchy in which flip-flop
connections and associative connections occur. The output of this part
is the category level of perception. The "specific part" is an interface
between the analogue part of the hierarchy and a discrete, logical, part
of the hierarchy. It is not a "level" as such, in that the interface
meets all levels of the analogue hierarchy.

See you Monday 13 (probably).

Martin

[Bill Leach 950303.18:40 EST(EDT)]

[Bill Powers (950303.0945 MST)]

     Bill Leach will no doubt support this observation; forgetting to
     cover all logical possibilities is a pitfall of electronic logic
     design. Whichever condition you forgot to provide for is almost
     certain to be the next one that occurs.

Extensive personal experience with this 'phenomenon' has enabled me to
further refine the relevant law (the name escapes me just now).

The problem has been, of course, the "almost" part. For perfect
predictive capability one only needs to know one additional item of
information about the 'system' and that is the 'surprise factor'.

If the surprise factor is greater than 1500 units, one can drop the
"almost" entirely. If the engineers' surprise factor can be held below
100 units, the uncovered possibility will not occur -- ever. In fact,
studies to test this have shown that even when test conditions are
established such that the uncovered possibility is the ONLY possible
outcome, it still won't occur.

Further research on the surprise factor in the range between 100 and 1500 is
needed to remove the remaining ambiguity.

-bill