SquareCircle demo revisited

[From Bill Powers (2010.03.20.0415 MDT)]
I have discovered something new, and obvious, about one of the demos in
LCS3 that I had overlooked for three years or so (and so, apparently,
has Bruce Abbott, who recoded this demo in object-oriented form). It will
take a while to describe the demo and what I missed about it.
The demo in question is called SquareCircle (Demo 9-1). There are two
views, toggled by typing a v. In the initial view, a white dot and a red
square are visible, and the task is to use the mouse to make the white
dot trace out the square, staying as close to the square as possible
(leaving a trail so you can see where the white dot went). This is rather
difficult because the connection between the mouse position and the
position of the white dot is not direct – it involves a time integration
in a rather complicated way that you can read about in the book.
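The book's actual coupling is more complicated than anything I can reproduce here, but a hypothetical minimal sketch may help show what a time integration does to the mouse-to-dot link. The gain and timestep below are invented for illustration; the point is only that the dot's velocity, not its position, follows the mouse, so the dot keeps drifting as long as the mouse is offset:

```python
# Hypothetical sketch of a time-integrated mouse-to-dot coupling.
# This is NOT the transformation from LCS3; it only illustrates the
# kind of lag and drift that any integration puts between the mouse
# and the white dot.

def step(dot, mouse, gain=3.0, dt=0.01):
    """One update: the dot's velocity is proportional to the mouse
    offset, so the dot keeps moving until the offset returns to zero."""
    return (dot[0] + gain * mouse[0] * dt,
            dot[1] + gain * mouse[1] * dt)

dot = (0.0, 0.0)
for _ in range(100):
    dot = step(dot, (1.0, 0.0))  # hold the mouse at a fixed offset
# after one second of holding the mouse still, the dot has drifted
# a full three units: stopping the mouse does not stop the dot
```

Compare this with the DIRECT mode described later, where the dot's position (not velocity) tracks the mouse, so the dot stops the instant the mouse does.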
The instructions say to type v when the tracing is complete. This shows
(in blue) the path actually taken by the mouse, and also shows a yellow
circle. The blue path of the mouse stays close to the yellow circle.
The point of this demo is to illustrate what is meant by saying that
behavior is the control of perception. The white dot is controlled so it
moves around the square, but you have to move the mouse in a circle to
accomplish that. Your behavior, moving the mouse, does not resemble the
controlled result of moving the mouse. Yet everyone sees the movements of
the white dot as representing what they are “doing” in this
task. Nobody, when first doing this task, has any idea that they are
moving the mouse in a circle.
To emphasize this, the instructions then say to make the blue dot trace
out the yellow circle while staying in the view where those can be seen.
This is far easier since the blue dot represents where the mouse actually
is. After this trace is complete, the instructions say to type v again to
toggle to the other view – and it’s seen that the white dot most
definitely does not trace out the square. The aforementioned time
integration sees to that.
There is also a “direct” mode (toggled by typing D) in which
making the mouse trace out the yellow circle or triangle (while
looking at that view) causes the white dot to trace the red square in
the other view, and vice versa. In the direct mode (the word
“DIRECT” appears on the screen), the radial distance of the
blue dot relative to the center of the yellow circle is automatically
converted, in the other view, to the radial distance from the center of
the square, scaled so that when the blue dot is on the yellow circle, the
white dot is on the red square. This is meant to show that with a
different transformation linking the mouse movements to the white dot,
the mouse movements will cause the red square to be traced out just as if
it could be seen. Toggling the view by typing v shows this to be the
case. This shows that when the behavior matches its result, it is true
that we seem to be controlling both the behavior and its result as we
normally assume.
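The radial rescaling just described can be sketched in code. The function names and unit sizes here are mine, not the demo's own, but the geometry follows the description above: the angle is preserved, and the radius is scaled by the ratio of the square's edge distance to the circle's radius at that angle, so a point on the circle lands exactly on the square:

```python
import math

CIRCLE_R = 1.0   # radius of the yellow circle (illustrative units)
HALF_SIDE = 1.0  # half the side length of the red square

def square_radius(theta):
    """Distance from the square's center to its edge at angle theta."""
    return HALF_SIDE / max(abs(math.cos(theta)), abs(math.sin(theta)))

def direct_map(x, y):
    """Map the blue dot's position (relative to the circle's center)
    to the white dot's position (relative to the square's center),
    keeping the angle and rescaling the radial distance."""
    r = math.hypot(x, y)
    if r == 0.0:
        return (0.0, 0.0)
    theta = math.atan2(y, x)
    r_out = r * square_radius(theta) / CIRCLE_R
    return (r_out * math.cos(theta), r_out * math.sin(theta))
```

With this mapping, a blue dot on the circle at 45 degrees comes out at the square's corner, and a dot halfway to the circle comes out halfway to the square's edge, which is the scaling the demo's DIRECT mode describes.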
And it also shows why the discovery that behavior is the control of
perception didn’t happen long ago.
So far so good; all that is in the book. But what isn’t in the book is an
instruction to go back to the first view in the DIRECT mode and make the
white dot trace the red square again. I said “… and vice
versa” in the book just as I said above, but I didn’t actually
try the “vice versa.” I haven’t heard from anyone else
who has tried it, either.

With the mode left in “DIRECT,” go back to the
red-square view by typing v, press the space bar to clear the traces away,
and try tracing the square again. It’s still not easy, but it’s
considerably easier because that time integration isn’t there: when you
stop moving the mouse, the white dot stops instantly.

The main difference is that when you toggle the view to show where the
mouse actually moved, the blue trace is a much better fit to the yellow
circle than it was in the other, fancier, mode. Somehow I never checked
this. I was thinking that in the DIRECT mode, “of course” the
dots would be on their respective square or circle, without reflecting
that in the red-square view, you still can’t see how the mouse is moving.
The control-of-perception effect is even more pronounced in the DIRECT
mode. If I had tried that mode first, I probably wouldn’t have used the
more complex transformation.

However, the other mode is still useful in showing that with a dynamic
transformation between mouse position and position of the white dot, you
can control the white dot in the red-square view while the mouse moves in
a path close to the circle, but if you make the mouse trace out the
circle in the other view, the result is not to make the white dot trace
the red square.

All this is much easier to understand by actually doing the demo instead
of trying to imagine it while reading these words.

All this works just the same when a triangle instead of a circle is used
as the template (select by typing T or C), though the corners of the
triangle cause great difficulties.

I suppose that if one really practiced this task, proficiency would
improve greatly, especially in the Triangle version. Since the mapping
between visual and kinesthetic space is involved, it’s quite possible
that going back to normal tracing (in the yellow circle or triangle view)
would start to show some problems at the corners of the triangle after
one had learned to do it by practicing with the red-square view long
enough. The situation is quite similar to the phenomena seen when a
subject wears prism spectacles that invert or otherwise distort the
relationship between kinesthetic and visual space. Here is a nice subject
for a thesis!


Bill P.