Ambiguous figures (was Self Interest)

[Martin Taylor 2004.04.27.23.11]

[From Bill Powers (2004.04.27.0233 MST)]

Martin Taylor 2004.04.26.0.22 --

B.W.: Would perceptual shifts like necker (sp?) cubes be an
instance of this?

I don't know. It's possible. I hadn't thought about things that way.

Necker cubes are not really cubes. We do not get those flips with real
cubes unless we view them with one eye, or at a distance so large that
binocular vision fails to resolve the ambiguity. A Necker cube is a
2-dimensional drawing in which either of two three-dimensional
interpretations is equally valid (and Martin claims he can see more than
two).

I don't think that logic comes into play here -- at least it's not required.

I don't claim I can see more than two. I did an experiment with
people who had never been told what to see, and the two cubes were
not even necessarily the first things they saw among the dozen or
more that a subject might see in a few minutes of looking. We just
asked them to say what they saw.

It's only when you are told that you should expect to see a cube from
one direction or the other that you see the "classic" two cubes and
nothing else. I think logic does come into play, at least in the form
of language.

The same held when we asked people what they heard in a short
repeating tape loop. They were told that it was actually changing
subtly. Some of the subjects were told that it would change only to
English words, whereas the others were told that it might change into
nonsense. And that's what they heard (not just what they reported
they heard, according to complementary measures).

Martin

[From Bill Powers (2004.04.28.0757 MST)]

Martin Taylor 2004.04.27.23.11 --

I don't claim I can see more than two. I did an experiment with
people who had never been told what to see, and the two cubes were
not even necessarily the first things they saw among the dozen or
more that a subject might see in a few minutes of looking. We just
asked them to say what they saw.

That's fascinating, but I'm puzzled by your finding people who did not
instantly recognize a line-drawing of a cube. That probably just shows how
long it has been since I first saw such a drawing. Nevertheless, if I were
shown that drawing and asked what I saw, I might well think to myself,
"Well, it's obviously a cube, but what else does he want me to describe
about it?" So you might think I didn't see the cube at first. Convince me
that this wasn't the case. Were these people like Inuit who didn't live in
a rectilinear world?

I've heard of the experiment with the repeating words on tape, but there
the illusion of hearing other words doesn't develop right away -- doesn't
it take many repetitions before the extraneous words start to appear?

I have a suspicion that the influence of what we believe on what we
perceive arises mainly under conditions where perception is difficult, or
else actually ambiguous as in the Necker cube. I should think there would
be a continuum from mostly perception to mostly imagination, with the
"mostly imagination" end being rare. But maybe you know different.

Best,

Bill P.


Re: Ambiguous figures (was Self Interest)

[From Bill Powers (2004.04.28.0757 MST)]

Martin Taylor 2004.04.27.23.11 --

I don't claim I can see more than two. I did an experiment with
people who had never been told what to see, and the two cubes were
not even necessarily the first things they saw among the dozen or
more that a subject might see in a few minutes of looking. We just
asked them to say what they saw.

That's fascinating, but I'm puzzled by your finding people who did not
instantly recognize a line-drawing of a cube.

I can’t go back to the original records, though they must be
around somewhere (I’m a packrat that way, but a packrat doesn’t index
its belongings). But I do remember some of the kinds of things people
said: a butterfly, a pipe opening toward you (you could call that a
cube, I guess, but since they described it as being different from a
cube, we took it as being different), a pipe opening to the left, a hexagon
with bits on it, and many other things, including the conventional
cubes.

...if I were shown that drawing and asked what I saw, I might well think
to myself, "Well, it's obviously a cube, but what else does he want me to
describe about it?" So you might think I didn't see the cube at first.
Convince me that this wasn't the case.

I can’t do that. What I can do is tell you the evidence that
suggests people were not suppressing perceptions, and were telling us
as best they could what they saw.

We timed when people mentioned a change of percept, and noted
whether the change was to a form seen before or to a novel one not
previously mentioned. If there was doubt, the experimenter would ask
“is that a new one?” or something like that.

Then we treated the percepts as nodes in a graph, and transitions
as links in that graph. For a graph with N nodes, there are N(N-1)
one-way links (i.e. 2 links possible between any pair of nodes). When
you plot the number of forms mentioned up to time t against the number
of transitions up to the same moment, there's a remarkably tight fit --
the number of transitions plots right on the curve of N(N-1). You can
see the fit, which I think would please even you as a critic of the
usual level of fit accepted in psychology studies. The experiment and
the one on repeated words were published in "Transformations of
perception with prolonged observation", Canad. J. Psychol., 1963, 17,
349-360 (with G. B. Henning), and "Verbal transformations and an effect
of instructional bias on perception", Canad. J. Psychol., 1963, 17,
210-223 (with G. B. Henning).
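
(A minimal sketch, in Python, of the bookkeeping just described. The
report sequence and the names are invented for illustration; this is not
the original analysis code.)

reports = ["cube-from-above", "butterfly", "cube-from-below",
           "butterfly", "cube-from-above", "hexagon", "butterfly"]

forms = set()                 # distinct percepts (nodes) reported so far
transitions = set()           # distinct one-way transitions (links) so far
previous = None

for form in reports:
    forms.add(form)
    if previous is not None and previous != form:
        transitions.add((previous, form))
    previous = form
    n = len(forms)
    print(f"forms={n}  distinct transitions={len(transitions)}  N(N-1)={n * (n - 1)}")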

Were these people like Inuit who didn't live in
a rectilinear world?

Housewives from the base married quarters.

I've heard of the experiment with the repeating words on tape, but there
the illusion of hearing other words doesn't develop right away -- doesn't
it take many repetitions before the extraneous words start to appear?

Yes it does. But once changes begin to happen, they happen at an
accelerating rate. In another study (Stochastic
processes in reversing figure perception. Perception and
Psychophysics, 1974, 16, 9-27 (with K.D.Aldridge)) looking at
the “bubbles-dents” reversal with perceived lighting
direction, we had one subject who saw no transitions until the third
day of looking at it for 36 minutes each day.

I have a suspicion that the influence of what we believe on what we
perceive arises mainly under conditions where perception is difficult, or
else actually ambiguous as in the Necker cube. I should think there would
be a continuum from mostly perception to mostly imagination, with the
"mostly imagination" end being rare. But maybe you know different.

I don’t know anything, but my suspicion is the same as
yours.

However, I would add something else. In our experiments, we had
great difficulty finding ANY figure that a naive observer saw in
only two configurations, and we tried pretty hard. The bubbles-dents
display was the only one we could find, and we made that by physically
pounding a sheet of grey plasticine with a table-tennis ball and
side-lighting it in a box that allowed no view of its edges. We
couldn't do it with any line drawing, or even most photographic
images.

So, what I would add is that I suspect that when one is receiving
the same data for a long time, it reorganizes (in both the PCT sense
and the everyday sense). Some people take longer than others for this
to happen, or perhaps at different times and in different conditions a
person may take more or less time.

The issue in the bubbles-dents study was to refute the then (and
I think still) conventional view that the changes in the Necker Cube
were due to some kind of adaptation or fatigue. We needed precise
timing data, which we then modelled. The model asserted that the
consciously perceived form was the result of a kind of vote among a
set of lower-level analysers that changed their vote randomly (Poisson
process). If the vote total was above some upper bound, form A would
be seen. If it was below some lower bound, form B would be seen, and
if it was between the two bounds, the form most recently seen would
prevail.
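
(A rough simulation sketch of that voting scheme, in Python. The flip
rate, time step, and bounds are assumed values for illustration; the
actual model and fitting procedure are in the 1974 paper.)

import random

N_UNITS = 26            # lower-level voting units (roughly the value for subject "B" below)
LOWER, UPPER = 12, 16   # hysteresis bounds on the count of "bubbles" votes
FLIP_RATE = 0.5         # assumed mean flips per second per unit (Poisson process)
DT = 0.05               # simulation time step, seconds

votes = [random.randint(0, 1) for _ in range(N_UNITS)]   # 1 = "bubbles", 0 = "dents"
percept, switches, t = "bubbles", [], 0.0

while t < 540.0:                          # one 9-minute viewing session
    for i in range(N_UNITS):
        # memoryless switching: each unit flips with probability rate*dt
        if random.random() < FLIP_RATE * DT:
            votes[i] ^= 1
    total = sum(votes)
    if total > UPPER and percept != "bubbles":
        percept = "bubbles"
        switches.append(t)
    elif total < LOWER and percept != "dents":
        percept = "dents"
        switches.append(t)
    # between the bounds, the most recently seen form prevails (hysteresis)
    t += DT

print(len(switches), "perceptual switches in 9 minutes")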

Using this model, we were able to fit some rather dramatic
changes in the survivorship curves on the assumption that one of the
parameters had changed by exactly one unit. The parameters were the
total number of voting units (in the region of 30 for each of the
subjects we analyzed fully), the location of the upper boundary, and
the location of the lower boundary. I think there was one occasion
when we needed to change two parameters at once to model the change in
the curve.

Ambiguous figures are very interesting, but what is more
interesting is that it is so hard to find an input pattern that is not
ambiguous.

Martin

[From Bill Williams 28 April 2004 9:30 AM CST]

[From Bill Powers (2004.04.28.0757 MST)]

Martin Taylor 2004.04.27.23.11 --

In commenting on Martin's report of his experiments, Bill Powers says,

That's fascinating, but I'm puzzled by your finding
people who did not instantly recognize a line-drawing of a cube.

It might help matters if Bill Powers would exert some effort to dislodge his
implicit assumption that everyone has a BA or BS in engineering or physics.

Bill Powers goes on to ask,

Were these people like Inuit who didn't live in a rectilinear world?

I'm coming to the conclusion that even though economists are typically
reasonably well trained in regard to quantitative skills, many of them are
so lacking in the understanding of the world that is typical of an
engineer or physicist that the presentation characteristic of
_B:CP_ is simply not intelligible -- not without the world view that _B:CP_
assumes.

Bill Williams


Martin, have you ever worked with the Rotating Trapezoid illusion? That's my favourite.
cheers, David

···

On Wednesday, April 28, 2004, at 07:57 AM, Martin Taylor wrote:

Martin Taylor 2004.04.27.23.11 --

Ambiguous figures are very interesting, but what is more interesting is that it is so hard to find an input pattern that is not ambiguous.

Martin

Dr. David Wolsk
Associate, Centre for Global Studies
Adjunct Professor, Faculty of Education
University of Victoria, Canada

[From Bill Williams 29 April 2004 7:40 PM CST]

David Wolsk wrote:

Martin, have you ever worked with the Rotating Trapezoid illusion? That's my favourite.
cheers, David

David, could you supply the name of the guy who did the work with the rotating trapezoid illusion? I’ve been trying to think of his name.

Bill Williams

[Martin Taylor 2004.05.05.10.57]

Martin, have you ever worked with the Rotating Trapezoid illusion.
That's my favourite.
cheers, David


No, but we had a related mobile one. We made up a horizontal disk
with six (I think) vertical spikes sticking up from its periphery.
The subject viewed this through a slot that allowed sight only of the
middle section of the spikes as the disk rotated. People see lots of
different things in this, including bouncing spikes and partial
rotation reversals. I imagine the trapezoid would be similar, if you
don't pre-bias the subjects.

The problem with using mobile reversing figures is that the timings
of the perceptual changes are likely to be influenced by the period
of the physical motion. We could eliminate any adaptation-based
explanations ("satiation", "fatigue") of the changes by looking at
the probabilities of return to the most recently seen form as
compared to going on to a form seen in the more distant past, using
any of the multi-form ambiguous figures. What we couldn't do is to
get a precise fix on the timings of perceptual changes, which we
could then model.

It was the modelling of the changes in our "bubbles-dents" figure
that allowed us to suggest that one of our subjects used 26
lower-level voting units, while the other that we modelled carefully
used 33 (Taylor and Aldridge, "Stochastic processes in reversing
figure perception", Perception and Psychophysics, 1974, 16, 9-27).
With the computers available in 1973, fitting was a time-consuming
and expensive procedure, so we only fitted the most and the least
stable of the four subjects, judging by the crude measures of
switching rate and changes of bias in what was most perceived.

Just to show the consistency, I'll give the values for the upper and
lower bounds for the 20 9-minute sessions of the two subjects (4/day,
5 days). (If the number of votes for "bubbles" is less than the lower
bound, the subject sees "dents". If it is higher than the upper
bound, the subject sees "bubbles". If the number of votes is between
the two bounds, the observer sees whatever she had been seeing.)

Subject "B"
N units 26 26 ...... (no changes)
Lower 12 13 12 12/ 13 12 12 12/ 12 12 12 12/ 12 12 12 11/ 11 11 11 11
Upper 16 16 16 16/ 16 16 16 16/ 16 15 15 15/ 16 16 16 15/ 15 15 15 15

Subject "E"
N units 33 33 .... (no changes)
Lower 14 14 14 15/ 14 15 15 14/ 14 14 15 14/ 15 16 15 15/ 15 15 15 15
Upper 19 19 19 20/ 19 20 20 20/ 20 20 21 20/ 20 21 22 23/ 21 21 22 22

These subtle changes represent quite large changes in the timing
patterns, but the interesting thing is that they are all (except for
one overnight change in the upper bound of subject E) a single unit
shift of the voting criterion.

I think this modelling probably matches the usual PCT-based modelling
insofar as it defines a precise mechanistic structure, checks its
performance in simulation, and determines consistent personal
parameters, in which slight changes accurately fit quite large
changes in the performance of real individual humans.

Martin


(David Wolsk 2004.05.05, 14.56)

Martin Taylor 2004.04.27.23.11 --

Ambiguous figures are very interesting, but what is more
interesting is that it is so hard to find an input pattern that is
not ambiguous.

Martin

Martin, although I could appreciate your mathematical modelling
approach to the ambiguous figures you used, in my experience, the
Rotating Trapezoid illusion works at the 100 percent rate when the
viewers are at the back of the classroom and the trapezoid is painted
to look like a 3 dimensional window frame. It still works for me,
moving back and forth, no matter how hard I try to see it continuously
revolving ........ and I've used it many times.

According to the "legend" that was passed on to me, when the apparatus
was taken to African tribes with no experience of living with windows,
it worked much less but there was still success with a proportion of
the Africans.
David

Dr. David Wolsk
Associate, Centre for Global Studies
Adjunct Professor, Faculty of Education
University of Victoria, Canada

[Martin Taylor 2004.05.05.1846]


I can quite believe you, if you've tried it with observers who
haven't had any prior suggestion as to what they should see. We never
used it. In all the other figures except the bubbles and dents, we
found that people who had seen the figure before, or who had been
told what to expect, were more likely to see only the "conventional"
forms, whereas naive observers saw all sorts of unexpected things.
Maybe the rotating trapezoid is like the bubbles and dents, easily
seen in only two ways, even by naive observers.

Incidentally, I should correct something I said earlier, having
reread our report. Almost everyone looking at the Necker cube saw a
3-D shape as their first percept, whereas for the hexagon with diagonals
drawn (which also can be seen as a cube), only 60% saw a 3-D figure
first. I think I said that flat forms were equally likely to be seen
for both figures.

My main point in bringing up these old studies was to support the
point someone made, that expectation can play a substantial part in
what is seen. It isn't well specified by the sensory data.

Martin

[From Bill Powers (2004.05.05.1836 MST)]

Martin Taylor 2004.05.05.1846--

My main point in bringing up these old studies was to support the
point someone made, that expectation can play a substantial part in
what is seen. It isn't well specified by the sensory data.

Wouldn't "expectation" translate into "reference signal?" You see what you
want to see, provided there's enough ambiguity to give imagination control
over which of several different perceptions might be present. If you want
to see the Necker cube in a specific orientation, you imagine the specific
depth perceptions that will make it so.

Best,

Bill P.

[Martin Taylor 2004.05.05.2306]

[From Bill Powers (2004.05.05.1836 MST)]

Martin Taylor 2004.05.05.1846--

My main point in bringing up these old studies was to support the
point someone made, that expectation can play a substantial part in
what is seen. It isn't well specified by the sensory data.

Wouldn't "expectation" translate into "reference signal?" You see what you
want to see, provided there's enough ambiguity to give imagination control
over which of several different perceptions might be present. If you want
to see the Necker cube in a specific orientation, you imagine the specific
depth perceptions that will make it so.

I'm not sure I follow this. I've always thought of a reference signal
as something that is compared with the perceptual signal, and an
action consequent on the error then alters the perceptual signal to
bring it (one hopes) closer to the reference value. You seem to be
suggesting that the reference signal itself contributes directly to
the perceptual signal.

Martin

[From Bill Powers (2004.05.06.0551 MST)]

Martin Taylor 2004.05.05.2306 --

I'm not sure I follow this. I've always thought of a reference signal
as something that is compared with the perceptual signal, and an
action consequent on the error then alters the perceptual signal to
bring it (one hopes) closer to the reference value. You seem to be
suggesting that the reference signal itself contributes directly to
the perceptual signal.

Yes, that's how I think of it, too. If the perception doesn't match the
reference signal, the error signal is distributed to lower systems whose
control actions have effects that contribute to the state of the perceptual
signal. If you want to see the Necker cube as having a particular corner
closest to you, but the actual perception fails to match that reference
because of a lack of depth information, a lower system can be used in the
imagination mode to supply the missing depth information that will make the
selected corner appear to be closer than the other corners. Then the cube
will appear, at the higher level, to be in the desired, intended, or
"expected" orientation -- as I am suggesting we interpret the term "expected."

With respect to your comments about your earlier experiments, I'm curious
about the concept of "voting units." This would seem to imply (as I
understand voting) that each unit is aware of possible alternate
interpretations, but perhaps uses different criteria as the basis for
choosing between them. Isn't this more like the "coding" model of
perception, in which each perceptual signal can represent several different
classes of perception, as opposed to the model used in PCT where one signal
always indicates one perception (barring reorganization of the PIF)?

Best,

Bill P.

[From Bruce Nevin (2004.05.06 09:29 EDT)]

Bill Powers (2004.05.06.0551 MST)--

If the perception doesn't match the
reference signal, the error signal is distributed to lower systems whose
control actions have effects that contribute to the state of the perceptual
signal. If you want to see the Necker cube as having a particular corner
closest to you, but the actual perception fails to match that reference
because of a lack of depth information, a lower system can be used in the
imagination mode to supply the missing depth information that will make the
selected corner appear to be closer than the other corners. Then the cube
will appear, at the higher level, to be in the desired, intended, or
"expected" orientation -- as I am suggesting we interpret the term "expected."

What causes the lower system to go into imagination mode?

Perhaps it is always receiving a copy of the reference signal. Then when
the higher-level system controls its input by means of the lower system in
the absence of perceptual input from the environment (the input whose
absence makes the figure ambiguous), that copy of the reference input is
all that there is to control. When there is input from the environment, on
the other hand, the copy of the reference input has little influence on the
weighted sum of all inputs to the lower system's PIF.

Is this a better account than the switch diagram? I could never figure out
how the higher system could throw the switches on all the appropriate lower
systems.

This also helps to explain why imagined control is less "vivid" an
experience than control of "real" input. By this hypothesis, the copy of
the reference input has lower amplitude than other inputs to the PIF. Other
observations might have an explanation here, e.g. some people have a more
vivid imagination (less difference in amplitude), imagination can be
cultivated, hallucinogens affect synapses & hence signal amplitude, and so on.

         /Bruce Nevin


Martin,

You might find this paper interesting to read in respect of your
discussion in this thread:

Emotion is Essential to All Intentional Behaviors by Walter J Freeman
http://members.shaw.ca/competitivenessofnations/Anno%20Freeman%20Emotion%20is%20Essential%20a.htm

BTW there is an animated Necker cube, illustrating how the
perception can be controlled using objects that pass by or though it:
http://dogfeathers.com/java/necker.html

Peter Small

Author of: Lingo Sorcery, Magical A-Life Avatars, The Entrepreneurial
Web, The Ultimate Game of Strategy and Web Presence
http://www.stigmergicsystems.com

[Martin Taylor 2004.05.06.0940]

[From Bill Powers (2004.05.06.0551 MST)]

With respect to your comments about your earlier experiments, I'm curious
about the concept of "voting units." This would seem to imply (as I
understand voting) that each unit is aware of possible alternate
interpretations, but perhaps uses different criteria as the basis for
choosing between them. Isn't this more like the "coding" model of
perception, in which each perceptual signal can represent several different
classes of perception, as opposed to the model used in PCT where one signal
always indicates one perception (barring reorganization of the PIF)?

We had no commitment to any model of how the "voting units"
operated. All that mattered to the model was that any of 26 (or 33 for
the other subject) inputs to whatever created the conscious perception
could switch according to a Poisson process (i.e. without memory of
when it last switched). All I know is that the model gave surprisingly
good fits to the data. Here is a figure (in-line and as an attachment)
showing the fits for Subject B (the most variable of the two), who
each day looked four times for 9 minutes straight, with a 3-minute
break between 9-minute periods. We fit whole 9-minute chunks, so if
there was a parameter change during the chunk, it would spoil the fit.
That may have happened during the first 9-minute chunk on days 2 and
3.

The thing to note is how the two curves representing the bubble
phase and the dent phase split apart and come together, and how those
changes are followed by changing one parameter by one unit in the
model.

What the “voting units” and the “electoral officer
unit” are is an open question. I won’t argue for any particular
functional description based on these data. What I do argue is that
the flips of perception have nothing to do with “satiation”,
“adaptation”, “fatigue” or any related
process.

Martin

[Martin Taylor 2004.05.06.10.14]

Martin,

You might find this paper interesting to read in respect of your
discussion in this thread:

Emotion is Essential to All Intentional Behaviors by Walter J Freeman
http://members.shaw.ca/competitivenessofnations/Anno%20Freeman%20Emotion%20is%20Essential%20a.htm

Thanks. I'm always interested to read what Freeman has to say. So
far, I've just seen the abstract, but it does sound as if he is
taking both the chaos and the PCT view together, as I have argued we
ought to do.

Here's a bit of the abstract:


-------
Emotion is defined as a property of intentional behavior. The
widespread practice of separating emotion from reason is traced to an
ancient distinction between passive perception, which is driven by
sensory information from the environment, and active perception,
which begins with dynamics in the brain that moves the body into the
environment in search of stimuli. ... An essential part of
intentionality is learning from the sensory consequences of one's own
actions. ... The distinction between "rational" versus "emotional"
behaviors is made in terms of the constraint of high-intensity
chaotic activity of components of the forebrain by the cooperative
dynamics of consciousness versus the escape of subsystems owing to an
excess of chaotic fluctuations in states of strong arousal.
-------
I'll read the chapter with interest. From the abstract, however, I'm
not clear how it will relate to the timing patterns of reversing
figures. We observed little overt evidence of heightened emotional
states in our subjects when they were pushing the microswitch
buttons. Amusement and surprise, perhaps, when they first saw the
perception unexpectedly switching, but not much else on the surface.

Martin

[From Bill Powers (2004.05.06.0844 MST)]

Bruce Nevin (2004.05.06 09:29 EDT)--

If the perception doesn't match the
reference signal, the error signal is distributed to lower systems whose
control actions have effects that contribute to the state of the perceptual
signal. If you want to see the Necker cube as having a particular corner
closest to you, but the actual perception fails to match that reference
because of a lack of depth information, a lower system can be used in the
imagination mode to supply the missing depth information that will make the
selected corner appear to be closer than the other corners. Then the cube
will appear, at the higher level, to be in the desired, intended, or
"expected" orientation -- as I am suggesting we interpret the term
"expected."

What causes the lower system to go into imagination mode?

I don't know. Another way of handling error at a higher level?

Perhaps it is always receiving a copy of the reference signal. Then when
the higher-level system controls its input by means of the lower system in
the absence of perceptual input from the environment (the input whose
absence makes the figure ambiguous), that copy of the reference input is
all that there is to control.

You'll have to explain this to me. Are you proposing that the reference
signal entering the higher system also enters the lower systems, so the
lower systems' reference signals are composed of the reference signal
entering the higher system and the output signals produced by the higher
system? Perhaps we need a diagram here. A major problem with this
arrangement would be that the same signal would have two different
meanings. You'd have to show me how this is supposed to work.

I was simply trying to describe the normal mode of operation of a control
system, with (in this case) one or a few of the lower systems providing the
information needed to disambiguate the higher perception. That requires
some of the lower systems to be in the imagination mode when normal depth
information is missing. For the case in question there is no actual depth
information in the picture. In order to perceive the figure in three
dimensions at all, it is necessary to supply the kind of depth information
we would get from binocular vision or shading in the case of a real cube.
If we do succeed in seeing the figure as three-dimensional, we must have
supplied ourselves with the missing lower-order perceptions by some means.
My suggestion is one possible means.

When there is input from the environment, on
the other hand, the copy of the reference input has little influence on the
weighted sum of all inputs to the lower system's PIF.

Is this a better account than the switch diagram? I could never figure out
how the higher system could throw the switches on all the appropriate lower
systems.

Neither could I. But any scheme that allows both real-time and imaginary
signals to contribute to the same perceptual signal at the same level has a
big problem -- how does the controller know whether any error is real or
partly imagined? All it knows is the net perceptual signal, which it brings
to whatever reference level is given. If, for example, 50% of the
perceptual signal is coming from imagination, then when the perceptual
error appears to be zero, the real perceptual signal will be only half as
large as the reference signal, the other half being supplied by the
imagination connection. I can't see how that would serve any useful purpose.

This, by the way, would be a practical problem if a control task required
adjusting the depth dimension when there was actually no depth dimension.

For that sort of reason I decided that any one perceptual signal must be
either completely real-time, so it correctly represents some aspect of the
perceived world, or completely imaginary, so it can be arbitrarily
manipulated without requiring action. That's what led to the switching
idea, the switches being a way to make the choices mutually exclusive. Of
course when a higher-level perceptual signal is being controlled, the many
lower-level control systems whose perceptions are copied to the higher
system can each be either in the imagination mode or the real-time mode.
The higher perception can thus depend both on real and imagined information
from a lower level. However, it is not the _same_ perception that is
partially imaginary and partially real at the same level.
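
(A toy illustration of that switching idea, in Python, with invented
signal values and weights; a sketch of the principle only, not a claim
about Powers's actual model.)

def lower_signal(real_input, reference, imagination_mode):
    # One lower system's contribution: entirely real-time or entirely
    # imagined (a copy of its reference), never a blend at this level.
    return reference if imagination_mode else real_input

# Three lower systems feeding one higher-level perception of the "cube".
# The depth-related system has no real input (the drawing is flat), so it
# runs in imagination mode and supplies its reference value instead.
contributions = [
    lower_signal(real_input=0.8, reference=0.8, imagination_mode=False),  # edges
    lower_signal(real_input=0.6, reference=0.6, imagination_mode=False),  # angles
    lower_signal(real_input=0.0, reference=1.0, imagination_mode=True),   # depth (imagined)
]
weights = [0.4, 0.3, 0.3]
higher_perception = sum(w * s for w, s in zip(weights, contributions))
print(higher_perception)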

This also helps to explain why imagined control is less "vivid" an
experience than control of "real" input.

I think that vividness is a matter of the level at which the imagination
connection exists. If you imagine at the lowest possible level (sensations
or intensities), the result would probably be called a hallucination
(hearing voices, feeling bugs crawling on you).

By this hypothesis, the copy of
the reference input has lower amplitude than other inputs to the PIF.

In the PCT model there is only one dimension to any neural signal,
frequency of firing -- the famous "all-or-none" property of neural impulses
prevents changes in the amplitude from one impulse to the next. So
frequency is what corresponds to "amplitude." But that one dimension has to
represent the range over which the perception in question can vary: setting
a reference signal means setting it to a particular frequency, and making
the perceptual signal match the reference signals means making its
frequency of firing the same as the frequency of the reference signal's
impulses. That uses up all the degrees of freedom of the signals: there is
no dimension left to cover any changes of amplitude of the sort you appear
to be suggesting.

The chemical changes you suggest (hallucinogens) would apply to the
effects of neurotransmitters as they diffuse into a receiving neuron's cell
body. That would alter the frequency of the output signal from the cell,
but again, that is like a change in the normal neural signal from one
magnitude to another magnitude, a one-dimensional change as usual. Another
effect hallucinogens might have would be to bias those "switches" so they
are thrown into the imagination mode; any attempt at action then results in
imaginary changes in perceptual signals instead of real ones. If you know,
at a higher level, that this has happened, you may be entertained by the
result but will not be taken in by it. But if you think the perceptual
effects are related to reality, you will be in big trouble.

Best,

Bill P.

[From Bill Powers (2004.05.06.1144 MST)]

Martin Taylor 2004.05.06.0940--

With respect to your comments about your earlier experiments, I'm curious
about the concept of "voting units." This would seem to imply (as I
understand voting) that each unit is aware of possible alternate
interpretations, but perhaps uses different criteria as the basis for
choosing between them.

We had no commitment to any model of how the "voting units" operated. All
that mattered to the model was that any of 26 (or 33 for the other
subject) inputs to whatever created the conscious perception could switch
according to a Poisson process (i.e. without memory of when it last switched).

What did they switch from, and to? It's hard for me to grasp the model
behind the study without understanding what you thought was going on. Was
each voter trying to disambiguate the figure independently?

All I know is that the model gave surprisingly good fits to the data.

That might or might not be significant -- it depends on whether you're
actually predicting something and could be wrong, or only fitting curves in
an indirect way.

Here is a figure (in-line and as an attachment) showing the fits for
Subject B (the most variable of the two), who each day looked four times
for 9 minutes straight, with a 3-minute break between 9-minute periods. We
fit whole 9-minute chunks, so if there was a parameter change during the
chunk, it would spoil the fit. That may have happened during the first
9-minute chunk on days 2 and 3.

Would it be relatively easy to explain the underlying model and what these
curves mean, or do I have to study the original article? I can't really
tell what I'm looking at in these plots.

[Martin Taylor 2004.05.06.1412]

[From Bill Powers (2004.05.06.1144 MST)]

Martin Taylor 2004.05.06.0940--

With respect to your comments about your earlier experiments, I'm curious
about the concept of "voting units." This would seem to imply (as I
understand voting) that each unit is aware of possible alternate
interpretations, but perhaps uses different criteria as the basis for
choosing between them.

We had no commitment to any model of how the "voting units" operated. All
that mattered to the model was that any of 26 (or 33 for the other
subject) inputs to whatever created the conscious perception could switch
according to a Poisson process (i.e. without memory of when it last
switched).

What did they switch from, and to? It's hard for me to grasp the model
behind the study without understanding what you thought was going on. Was
each voter trying to disambiguate the figure independently?

Yes, allowing "trying" as some kind of metaphor.

All I know is that the model gave surprisingly good fits to the data.

That might or might not be significant -- it depends on whether you're
actually predicting something and could be wrong, or only fitting curves in
an indirect way.

All modelling is doing both, I think. There's absolutely no reason a
priori to expect that one could even come close to fitting those
curves using this kind of a hysteresis model, let alone to fit
substantial block-to-block variation by single unit changes in
parameter values, as actually is always the case except for one
overnight change of 2 units in one parameter for one of the subjects.
There are lots of ways a priori that the nature of the fit could be
wrong, and the unit shifts, as well as the moderately low integer
number of units, were a major surprise.

Would it be relatively easy to explain the underlying model and what these
curves mean, or do I have to study the original article? I can't really
tell what I'm looking at in these plots.

The model says that there exists some construction that I call a
unit, that can report "bubble" or "dent". As input it has a number of
independent connections to the outputs of lower-level units, each of
which also can report "bubble" or "dent".

We make no specification of how those lower-level units operate.
Perhaps they look at different regions of the display, perhaps they
all look at the whole display. Perhaps a single lower-level unit is
actually two separate units, one of which reports high when it sees
bubbles and the other reports high when it sees dents. leaving the
higher-level unit to have an input function tat chooses the greater
of the two. None of this is part of the model. All that matters to
the model is that each lower level unit can independently change from
reporting bubbles to reporting dents, and back again, and those
changes occur according to a Poisson process (i.e. randomly, without
memory of when the previous change in that lower-level unit occurred).

The higher-level unit reports Bubbles (i.e. the person consciously
sees bubbles) when the number of lower-level units reporting dents is
below the lower bound, and it reports dents when the number of
lower-level units reporting dents is above the upper bound. When the
number is between the two bounds, it sees whatever it most recently
has been seeing.

The panels in the diagram show the distribution of times it takes for
the person to report a switch out of a bubble perception or out of a
dent perception. They are in the form of what is called "mortality
curves", meaning that what is shown on the x axis is time since the
previous switch, and on the y axis the probability that no further
switch has occurred (actually, log probability, in these figures, so
that a straight line would indicate a constant probability of
switching as a function of time since the last switch, after a
possible refractory period). In each panel, the time scale is such
that the x axis is 7.5 seconds long.

There are four curves in each panel. One is the mortality curve if
the person is seeing bubbles, and the second is the same for dents.
The others are the model fits to those two curves using the parameter
values noted in the panel itself. The thing to look for is how the
bubbles curve and the dents curve separate and approach one another
both in the experimental data and with unit changes in the parameter
values of the model.
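
(A small sketch, in Python with invented phase durations, of how such a
mortality curve is tabulated; these are not the experimental data.)

import math

durations = [1.2, 0.8, 2.5, 1.9, 0.6, 3.1, 1.4, 2.2]   # seconds spent in one phase (invented)
times = [0.5 * k for k in range(16)]                     # 0 to 7.5 s, as in the panels

for t in times:
    surviving = sum(1 for d in durations if d > t) / len(durations)
    # Log probability of "no switch yet"; a straight line over t means a
    # constant switching probability after any refractory period.
    log_p = math.log(surviving) if surviving > 0 else float("-inf")
    print(f"t = {t:4.1f} s   P(no switch yet) = {surviving:.2f}   log P = {log_p:6.2f}")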

I hope this helps.

Martin

[From Bruce Nevin (2004.05.06 17:15 EDT)]

Bill Powers (2004.05.06.0844 MST)–

You'll have to explain this to me. Are you proposing that the reference
signal entering the higher system also enters the lower systems,

No, I didn’t mean to imply any skipping of levels.

any scheme that allows both real-time and imaginary
signals to contribute to the same perceptual signal at the same level has a
big problem -- how does the controller know whether any error is real or
partly imagined?

Since the imagined signal is a copy of the reference signal, there can be
no “imagined error”. Any error must be due to the difference
between the “real” input and the reference signal.

when a higher-level perceptual signal is being controlled, the many
lower-level control systems whose perceptions are copied to the higher
system can each be either in the imagination mode or the real-time mode.
The higher perception can thus depend both on real and imagined information
from a lower level.

As a process of thinking this through carefully, indulge me in a review
of some basics. I don't pretend to be telling you anything you don't
know, and I may expose some things that I think I know that are mistaken.

Suppose a perceptual input function (PIF) at level n constructs a
perceptual input signal from the signals controlled by 10 elementary
control systems (ECS) below it at level m. Each contributes, on average,
10% of the weighted sum of perceptual inputs in the PIF. For ECS<n> to
control by means of these several ECSes<m>, the reference input function
(RIF) of each must get a copy of its error output signal. The most
straightforward supposition is that for every afferent connection from
lower PIF to higher PIF there is a corresponding efferent connection from
error output to RIF, and I see no obvious reason why less parsimonious
arrangements that possibly occur in nature could not be modeled in this
way.

In each case, the input function (PIF or RIF) determines the contribution
of a given signal to the signal that it constructs from them.

Any given RIF at the lower level is not limited to getting input from
just this one higher-level ECS. The value of the reference signal input
to a given ECS<m>, and consequently the value at which it controls its
perceptual input, is constructed from all the error signals entering the
RIF. If signals from two higher-level ECSes conflict, the result is
control at a value intermediate between the conflicting values.

If every higher-level ECS that is controlling a copy of the perceptual
signal p passed to it by this ECS is controlling it perfectly, then no
other error output signal enters its RIF to influence the value of its
reference signal. Of course, the surest way to control a perceptual
input perfectly is to control it in imagination.

The same principles apply to the PIF for ECS<n> at the higher level. It
constructs a net perceptual input signal from the several perceptual
input signals passed to it from below. The value of that net perceptual
input signal, the controlled perceptual input at this higher level, is
constructed from all the lower-level perceptual input signals entering
the PIF.

For imagination to work, the copy of the reference signal would have to
enter the PIF above the point where this summation takes place. With
pure imagination, the weighted sum of signals from below has no
influence. But as ECS<n> controls its input in imagination, each
tributary ECS<m> is also in imagination mode. The perceptual signals
that they pass up to the PIF of ECS<n> are copies of their reference
input, that is (absent conflict), the result of a null error signal
entering the RIF. The RIF constructs a reference perception, and this
construction is modulated by input error signals. Controlling in
imagination, ECS<m> passes a copy of that constructed signal up to the
PIF of ECS<n>. The weighted sum of signals from below still has no
influence, that is, it is identical to the copy of the reference signal
in ECS<n>.

It is only when at least one ECS<m> is controlling "real" input that it
is possible for the perceptual signal constructed by the PIF of ECS<n>
to depart from the reference signal.

All it knows is the net perceptual signal, which it brings
to whatever reference level is given. If, for example, 50% of the
perceptual signal is coming from imagination, then when the perceptual
error appears to be zero, the real perceptual signal will be only half as
large as the reference signal, the other half being supplied by the
imagination connection.

Suppose that 50% of the perceptual signal is always a copy of the
reference input. Since this is always the case, an error output
function could weight the comparator output accordingly. There can be no
doubt that control systems have developed in whatever way is necessary so
that they can control.

I can't see how that would serve any useful purpose.

The purpose is a mechanism for imagination. If this works, then we need
no additional mechanism to explain how the higher ECS “throws”
the imagination switch in each appropriate lower ECS. All that is needed
is absence of “real” input, and by default it is the copy of
the reference input that is controlled.
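
(A rough sketch of that default, in Python with hypothetical weights,
my construction rather than a claim about how a nervous system does it:
the copy of the reference signal is just one more, normally small, input
to the PIF, so it dominates only when the inputs from the environment
are absent.)

def pif(env_inputs, env_weights, reference_copy, ref_weight=0.1):
    # Perceptual input function: weighted sum of environmental signals plus
    # a (normally small) contribution from a copy of the reference signal.
    total = sum(w * x for w, x in zip(env_weights, env_inputs)) + ref_weight * reference_copy
    return total / (sum(env_weights) + ref_weight)

reference = 1.0

# With environmental input present, the reference copy has little influence:
print(pif([0.2, 0.3, 0.1], [1.0, 1.0, 1.0], reference))   # close to the input average

# With no environmental input (the ambiguous case), only the reference copy
# remains, so the perception defaults to the reference value:
print(pif([], [], reference))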
We agree on the subjective experience that imagined sensory input is
weaker than “real” sensory input. I attributed this to the
imagined perceptual signal being somehow weaker. I see that according to
our understanding of perceptual signals in PCT this is not possible. All
we have is a rate of firing: faster is more of the given perception, and
slower is less. And the subjective impression that imagined perceptions
are less vivid must be due to absence of inputs at lower levels. This is
consistent with an abundant literature on the importance of sensory
detail for a variety of exercises involving imagination.

I said above that "as ECS<n> controls its input in
imagination, each tributary ECS<m> is also in imagination
mode.” This implies a cascading of imagination-control down through
the hierarchy as a higher-level perception is controlled in
imagination.

This has the effect of alertness for input matching an expectation, as
sensors are mobilized to control their inputs.

A corollary is that every ECS is always controlling at least the
“imaginary” copy of its reference input. The absence of sensory
detail when we pay attention to a higher-level imagined perception is not
because sensory signals at the lower levels are absent, but rather
because they are presently being controlled at values other than those
that contribute to successful control at the higher level. This means of
course that there is a conflict between the higher-level system that is
controlling in imagination and whatever other systems are controlling by
a loop that actually closes through the environment. Summing to an
intermediate value satisfactory to neither is not the only way to resolve
conflict, another way is to lower the gain on one system. The cascading
downward of imagined control stops where conflicts are resolved by
lowering the gain on systems that are controlling in imagination, in
favor of the systems with which they are in conflict.

This is consistent with the common observation that imagination is
facilitated by minimizing disturbances of perceptions from the
environment.

This, by the way, would be a practical problem if a control task required
adjusting the depth dimension when there was actually no depth dimension.

Now that’s a neat idea – I wonder if it is possible to control the
perception of depth projected on a 2D figure. The Ames illusions suggest
not. Isn’t the perception of changing relative size there said to be
completely involuntary, not amenable to control? Now if it is possible to
switch from a perception of size change (contrary to conservation) to a
perception of distance change (contrary to apparent context), that would
be interesting indeed. Has anyone here actually experienced the Ames room
or one of his other illusions?

I wonder if a Necker cube could be distorted into a lozenge or diamond
shape projecting farther forward/back in the third dimension, and the
user given control of this distortion, either to maintain the cube
against disturbance, or to manipulate the cube to project more or less in
the 3rd dimension.

    /Bruce Nevin
