Why 99%, hierarchical perception

[From Rick Marken (921003.1100)]

Martin Taylor (921003)

The statistics can give a clue. If the
residual variance is proportional to x and to y, logarithms are the likely
answer. If it is more or less independent of x+y, then multiplication is
probable. What then? If a logarithm can be developed in one part of the
hierarchy, is it not likely that it can be done in another part? Then perhaps
we should look for logarithmic relations elsewhere in the hierarchy. But
if it looks more as if the answer is multiplication, then a host of different
relations seem reasonable candidates as the CEVs in other situations.

Martin, trust me (I'm a Dr.). The "Las Vegas" approach to doing research
can now be chucked. No need to analyze residuals, no need to look for
patterns in noise. Just build the models and do what it takes to get them
to behave exactly (to the level you like -- say less than 5% error) like
the real system. Tom Bourbon (921002.1050) -- also a Dr.-- and
Bill Powers (921004.0800) -- not a Dr. but, even better, a genius --
explain the approach extremely well. The method of PCT is modeling.
The method of moribund psychology is statistics.
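
To make the modeling method concrete, here is a minimal sketch of the
kind of program involved (everything in it -- the parameters, the
leaky-integrator output function, the "subject" data -- is made up for
illustration; it is not anyone's actual tracking code):

  import numpy as np

  def simulate_control(disturbance, gain=8.0, slowing=0.1):
      # One-level control loop: the perception is cursor position
      # (output + disturbance), the reference is zero, and the output
      # is a leaky integral of the amplified error.
      output = 0.0
      trace = np.empty(len(disturbance))
      for t, d in enumerate(disturbance):
          error = 0.0 - (output + d)                 # reference minus perception
          output += slowing * (gain * error - output)
          trace[t] = output
      return trace

  rng = np.random.default_rng(0)
  disturbance = np.cumsum(rng.normal(0, 0.02, 3600))   # slow random disturbance
  subject = simulate_control(disturbance, gain=7.0)    # stand-in for real data
  model = simulate_control(disturbance, gain=8.0)
  unexplained = np.var(subject - model) / np.var(subject)
  print(f"unexplained variance: {100 * unexplained:.2f}%")   # criterion: < 5%

The point is the workflow: simulate, compare the model's trace to the
real one, and keep refining until the unexplained variance is down in
the noise.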

Bill Powers (921003.0600) --

A wonderful post on the perceptual basis of HPCT. Let me just put in
a quick plug for an approach to exploring perception that I
describe in my soon to be rejected "Hierarchical control of perception"
paper. I have described this on the net (maybe) but I do so again in the
hopes of jogging some of those other minds out there for some suggestions
-- and maybe developing some new HPCT demos.

The method described in my paper is very simple -- numbers alternate back
and forth on the screen so, with time going from top to bottom, what is
presented is:

5
     7
8
     2
4
     6
etc...

The rate of alternation can be varied by the observer. When the rate is
very fast (the max possible on the computer -- say about 15/sec) all
you can see is the numbers -- their configuration. When you slow down the
rate of alternation you get to a point where you see the numbers "move"
back and forth, like the "phi phenomenon". Slow it down even more and
you can start to see the "sequence" -- you can tell that 5 comes before
7, then comes 8 and then 2, etc. The sequence can be perceived only
when the alternation rate is about 4/sec. If you slow it down even more
you can "see" that there is a rule underlying the sequence -- if number
on left >= 5 then number on right is odd, else number on right is even.
The observer can know this rule in advance but cannot perceive it (at
least, this here observer can't) until the alternation rate is about
.25/sec.
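
For anyone who wants to try this, here is a rough sketch of a program
that would generate such a display (the rule is the one just stated;
all the other details -- digit ranges, timing, console output instead
of screen positions -- are my own guesses, not the actual program from
the paper):

  import random
  import time

  def next_pair(rng):
      # Rule from the demo: if the left number is >= 5 the right one
      # is odd, otherwise it is even.
      left = rng.randint(0, 9)
      right = rng.choice(range(1, 10, 2)) if left >= 5 else rng.choice(range(0, 10, 2))
      return left, right

  rng = random.Random(0)
  rate = 4.0                       # alternations per second; try 15, 4, 0.25
  for _ in range(5):
      left, right = next_pair(rng)
      print(left)                  # left position
      time.sleep(1.0 / rate)
      print("     %d" % right)     # right position
      time.sleep(1.0 / rate)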

This demo is not supposed to be earth shattering; it is just an attempt
to provide a helpful way for people to examine their own perceptions.
It seems to me that it might help someone understand what it means to perceive
a "configuration", a "transition", a "sequence" and a "program". The
variations in rate help you "isolate" the perceptions and see that it is
possible to have a low level perception (like transition) that
implies the possibility of a higher order perception (a sequence) and still
not be able to perceive the higher level percept (until the rate is slowed).
It is interesting that it seems to take longer to perceive "higher order"
perceptions but this, in itself, does not imply a hierarchical relationship
between the perceptions. Bill's logical test -- that you can't perceive
certain things unless you can perceive their constituents -- seems like a
better basis for claiming hierarchy. Interestingly, the transition perception
goes away when the rate slows too much -- it obviously depends on other
things too -- such as distance between the numbers. But, since we do
see the sequence even though transition is gone, it seems like sequence
perception does not depend on having a perception of transition -- but it
does depend on having a perception of configuration (since it's a sequence
of configurations).

I think there must be "perceptual demos" of this sort that might help
to demonstrate some ways to look at perception (using our best and
most accessible lab -- our own brain) and see why at least some of us PCTers
think the H in HPCT represents quite a bit more than an opinion.

Best regards

Rick


**************************************************************

Richard S. Marken                   USMail: 10459 Holman Ave
The Aerospace Corporation                   Los Angeles, CA 90024
E-mail: marken@aero.org
(310) 336-6214 (day)
(310) 474-0313 (evening)

[Martin Taylor 921008 20:30]

In response to Rick Marken's suggestion that we continue a private discussion
via CSG-L. In view of his suggestion, I hope he will not object if I quote
from that mail. If I'm wrong, I apologise in advance.

I said in a posting just sent that I was going to put this off. But it might
be put off for a long time, so rather than do that, I'll try to be brief.

Rick (921005 15:55)

Why, for example, am I
wrong in suggesting that your data, which can be used (statistically) to
decide between x+y and x*y, could be used to discriminate between x*y and
log(x)+log(y)?

Why don't we do this on the net. I would have to have a little better
idea of exactly what you are deciding about (with respect to the model).
Why, for example, would you expect the variance of the error of prediction
to suggest anything about nature of the controlled variable? What is
the statistical analysis that you think is worthwhile for my data?

I'm not sure of the detail of the study (you probably described it somewhere
but I have forgotten), but let's assume that subjects were asked to "keep
the figure the same size" while you disturbed some aspect of it. You built
a simulation model that incorporated appropriate gains and delays, and tried
it with a perceptual input function of (x+y) and again with (x*y). You found
that the best you could do with (x+y) as an input function left you with
2% unexplained variance, but you could halve that error if you used (x*y).
It doesn't matter whether this is exactly what you did, because it can serve
to illustrate what I am talking about in any case.
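
To be concrete about the comparison, here is a sketch of the kind of
simulation I have in mind (the setup, gains, and numbers are all
assumed for illustration, not Rick's actual experiment): a simulated
"subject" keeps x*y at a reference by moving y while x is disturbed,
and we then fit models whose PIF is (x+y) or (x*y).

  import numpy as np

  dist = 3.0 * np.sin(np.linspace(0, 8 * np.pi, 2000))    # disturbance of x

  def run_loop(pif, ref, gain=1.0, slowing=0.1, noise=0.0, seed=0):
      # The loop keeps pif(x, y) near ref by moving y; x is disturbed.
      r = np.random.default_rng(seed)
      out = 0.0
      ys = np.empty(len(dist))
      for t, d in enumerate(dist):
          x, y = 10.0 + d, 10.0 + out
          err = ref - pif(x, y) + noise * r.normal()   # noisy perception
          out += slowing * (gain * err - out)          # leaky-integrator output
          ys[t] = y
      return ys

  subject = run_loop(lambda x, y: x * y, ref=100.0, noise=0.5, seed=2)
  for name, pif, ref in [("x+y", lambda x, y: x + y, 20.0),
                         ("x*y", lambda x, y: x * y, 100.0)]:
      uv = np.var(subject - run_loop(pif, ref)) / np.var(subject)
      print(f"PIF {name}: unexplained variance = {100 * uv:.1f}%")

The multiplicative model should leave only the subject's internal
noise unexplained, while the additive model should miss the curvature
that x*y control produces.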

What are the follow-on possibilities to this study? What have you found out?
I think you might be tempted to believe that you have shown the possibility
that perceptual input functions (PIFs) can include multiplication operators,
x and y both (presumably) being controllable CEVs in themselves. Now I propose
to you that this is false, and I propose as a counter-possibility that addition
operators and logarithm operators are possible, and that the "correct" PIF
is log(x)+log(y). It is (conceivably) important to distinguish these
possibilities, because they may have strong implications for the brain
structures involved not only with this percept, but with many other percepts.

What to do? If you have tested only with one "size", you can't tell these
possibilities apart very well, since the controlled value of either (x*y)
or (log(x)+log(y)) is a constant, and the errors are (by assumption) due
to non-infinite gains in some control loops associated with the task, which
affect the apparent overall gain of the "size" control loop. So, since
the error is a simple difference between reference signal and perceptual
signal, all we can do is estimate it and say "that's how good the control is."

But now let's introduce a new reference size, in which x and y are both
doubled. So the new level of (x*y) is four times the old reference level.
But the gain is, we assume, undisturbed, so we expect to see much the same
error in prediction. We have only changed the reference level, an additive
variable in the equation. The same is true if the subject is controlling
log(x)+log(y), but instead of multiplying the reference signal by 4, we have
added 0.6 units to whatever it was beforehand. If we now refer the variance
of our estimates to the size set as a reference for the subject, we find that
if the logarithmic hypothesis is correct, the residual variance is constant
in log size units, whereas if there is a multiplier operator in the PIF, the
residual variance is constant in area units.
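
A quick numerical check of these two predictions (my own illustration,
with made-up numbers; I use natural logs here, whereas the 0.6 figure
above is base 10). A constant-variance error in the perceptual signal
is pushed back through each candidate PIF and expressed in area units:

  import numpy as np

  rng = np.random.default_rng(3)
  sigma = 0.01                            # constant in-loop error s.d.
  for side in (10.0, 20.0):               # x and y both doubled
      area = side ** 2
      # logarithmic hypothesis: log(area) wobbles with s.d. sigma
      log_hyp = np.exp(np.log(area) + rng.normal(0, sigma, 100000))
      # multiplicative hypothesis: area itself wobbles with s.d. sigma
      mul_hyp = area + rng.normal(0, sigma, 100000)
      print(f"side {side:4.1f}: log-PIF area s.d. = {log_hyp.std():.3f}, "
            f"mul-PIF area s.d. = {mul_hyp.std():.3f}")

Under the logarithmic hypothesis the area-unit spread quadruples when
x and y double; under the multiplicative hypothesis it stays put.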

I think the results would be contaminated by other effects, but the main idea
should be valid. It's a very simple kind of case, but it is typical of the
sort of thing that psychologists want to know: is shape recognition initiated
in the Right Hemisphere and symbolic in the Left, before being transferred
across to the other hemisphere for syntactic and pragmatic integration? That's
the kind of question of interest for solid practical reasons that apply to
cases of stroke. How could you do a non-statistical PCT study to address
such a question?

Martin

[From Rick Marken (921009.1230)]

Martin Taylor (921008 20:30)

In response to Rick Marken's suggestion that we
continue a private discussion via CSG-L. In view of his suggestion, I hope
he will not object if I quote from that mail. If I'm wrong, I apologise in
advance.

I don't object, I rejoice (well, anyway, yes, that's what I wanted).

I think you might be tempted to believe that you have shown the possibility
that perceptual input functions (PIFs) can include multiplication operators,
x and y both (presumably) being controllable CEVs in themselves. Now I
propose to you that this is false, and I propose as a counter-possibility
that addition operators and logarithm operators are possible, and that
the "correct" PIF is log(x)+log(y).

No problemo. I was just trying to show that the variable being controlled
was more like x*y than x+y; I didn't care how the neural signal was derived
from the sensory inputs, though I think it might be interesting to try to
figure it out -- especially if you are interested in how neural processing
works.

It is (conceivably) important to distinguish these
possibilities, because they may have strong implications for the brain
structures involved not only with this percept, but with many other percepts.

OK.

What to do?

Probably try to find the neural signal that corresponds to the controlled
variable and start dropping some single cell recorders in there.

But now let's introduce a new reference size, in which x and y are both
doubled. So the new level of (x*y) is four times the old reference level.
But the gain is, we assume, undisturbed, so we expect to see much the same
error in prediction. We have only changed the reference level, an additive
variable in the equation. The same is true if the subject is controlling
log(x)+log(y), but instead of multiplying the reference signal by 4, we have
added 0.6 units to whatever it was beforehand. If we now refer the variance
of our estimates to the size set as a reference for the subject, we find that
if the logarithmic hypothesis is correct, the residual variance is constant
in log size units, whereas if there is a multiplier operator in the PIF, the
residual variance is constant in area units.

I'm not sure I buy this derivation. What is the "variance of our estimates",
by the way? What is the "residual variance" here? The variance of the
size of the quadrilateral relative to the reference size? In what
way is the residual variance "constant" in log size or area units? I don't
understand the analysis, I suppose. If you explain it to me I could do it
in a second since it's easy as pie to do the experiment -- I already
have the area control program ready to go and it's pretty easy to change the
size of the reference square. Maybe it's my bias but why not have the
reference square vary continuously in size? Then you could look at the
size of the error as a function of the size of the reference area. Maybe
that's what you want anyway -- maybe you are saying that the size of the
error will be constant with respect to log size rather than size?? Is that
it? I still don't get WHY this would be true if there is a log transform
applied to the inputs to the perceptual function. Maybe you could give
a simple mathematical reason for this prediction -- but keep it simple; I'm
certainly no math whiz, as, I imagine, you can tell.

I think the results would be contaminated by other effects, but the main idea
should be valid. It's a very simple kind of case, but it is typical of the
sort of thing that psychologists want to know.

Yes, and they miss the whole point. The fact that the person is controlling
a variable and that it took some doing to figure out what it is just goes
right by. Early astronomers wanted to know how the patterns they saw in
the night sky were related to one's personality. I know it's rude but,
frankly, I couldn't care less what conventional psychologists want to know.
What they want to know is based on their preconceptions about how people
work; so they think it is important to know how reinforcement affects
behavior or which behavioral/cognitive capabilities are in the right
brain and which are in the left, etc etc. Since their preconceptions
about how people work are wrong (because they don't understand the
consequences of the fact that organisms are locked in a negative feedback
situation with respect to their environment -- cf. the blind men paper)
it is only by chance that anything psychologists might want to know about is
anything more than an illusion from the point of view of PCT.

Have a nice weekend all

Rick


**************************************************************

Richard S. Marken                   USMail: 10459 Holman Ave
The Aerospace Corporation                   Los Angeles, CA 90024
E-mail: marken@aero.org
(310) 336-6214 (day)
(310) 474-0313 (evening)

[Martin Taylor 921010 15:40]
(Rick Marken 921009.1230)

This really has to be brief. Sorry.

What is the "variance of our estimates"
by the way? What is the "residual variance" here? the variance of the
the size of the quadrilateral relative to the reference size? In what
way is the residual variance "constant" in log size or area units.

The residual variance is the 1% or 2% that you didn't account for. Since
the feedback loop function is the same for any level of the reference signal,
I assume that the error variance within the ECS, largely due to non-infinite
gain, is independent of the level of the reference signal, and that it is
a major contributor to the residual variance that you (experimenter) see.

Maybe it's my bias but why not have the
reference square vary continuously in size?

That would be better, because then you could determine the residual variance
as a function of size, and perhaps draw some tighter conclusions than my
two-point proposal would have allowed.
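
Here is a sketch of what that analysis might look like (the protocol
and numbers are assumptions, not an agreed design): bin the
experimenter-side residuals by the momentary reference area and see
whether their spread is flat in area units (multiplier PIF) or grows
in proportion to area (log PIF).

  import numpy as np

  rng = np.random.default_rng(4)
  areas = np.linspace(50.0, 400.0, 20000)               # reference areas over a run
  resid_mul = rng.normal(0, 1.0, areas.size)            # constant in area units
  resid_log = areas * rng.normal(0, 0.01, areas.size)   # constant in log units

  for lo, hi in [(50, 167), (167, 283), (283, 400)]:
      m = (areas >= lo) & (areas < hi)
      print(f"area {lo:3d}-{hi:3d}: mul s.d. = {resid_mul[m].std():.2f}, "
            f"log s.d. = {resid_log[m].std():.2f}")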

I think the results would be contaminated by other effects, but the main idea
should be valid. It's a very simple kind of case, but it is typical of the
sort of thing that psychologists want to know.

Yes, and they miss the whole point. The fact that the person is controlling
a variable and that it took some doing to figure out what it is just goes
right by. Early astronomers wanted to know how the patterns they saw in
the night sky were related to one's personality. I know it's rude but,
frankly, I couldn't care less what conventional psychologists want to know.
What they want to know is based on their preconceptions about how people
work; so they think it is important to know how reinforcement affects
behavior or which behavioral/cognitive capabilities are in the right
brain and which are in the left, etc etc.

Well, I want to know how people work, even if you don't. It isn't enough for
me to accept that behaviour is controlled perception. I want to know where
the signals go (functionally), and what happens if you block this or that
path, how to deal with people suffering from stroke, why we have focussed
attention and what its limitations are, whether we use internal feedback
for short-term memory, and all sorts of questions like that.

Of course the details of what ANYONE wants to know are based on what they
think is missing in what they already believe. That's a first-level
statement from PCT. So what? If all you are interested in is a succession
of demonstrations that perception is controlled, then you are unlikely to
find much that is interesting to me. I would like to know where and how,
for example, perceptual signals are derived from multisensory inputs (why
a sound and a sight seem to come from the same object). The residual
variance proposal was a very peripheral example of that. I would like
to know whether a multiplier operator is likely to be a common phenomenon
in the hierarchy, or whether we should expect logarithmic compression to
be common. I'd bet on the latter.

Since their preconceptions
about how people work are wrong (because they don't understand the
consequences of the fact that organisms are locked in a negative feedback
situation with respect to their environment -- cf. the blind men paper)
it is only by chance that anything psychologists might want to know about is
anything more than an illusion from the point of view of PCT.

Some day, I'll write a posting about why conventional psychophysics should
for the most part be valid within PCT, but this isn't the time.

Martin