NYTimes.com: Cells That Read Minds

[From Bryan Thalhammer (2006.01.14.1110 CST)]

Well, I agree that we can't use these studies directly. I was thinking of how a neuron gets activated, setting aside the PCT explanation for the moment. Some studies have been able to isolate a single neuron and do some frankly amazing things (control of a computer program through its precise activation, apparently by its owner), but we may not be able to know where that activation comes from.

So, OK, there is this neuron, and a sensor that purportedly can identify a change in biochemical-electrical effects? (Is that a way to describe it?) But it is uncertain to me:

* Whether this neuron is normally part of the sensing process, or there is a case of mistaken identity, as in a criminal lineup, or the neuron got pushed ahead in line by the interaction of experimenter, subject, and technology, all in the good cause of relieving a problem. (No harm there, so far, in the cause of science.)

* Whether this neuron is at the beginning of a wave of activation, somewhere in the middle, or at the end. Is it guilt by prime cause, by association, or by the domino effect of a wave of activation that is naturally part of our neural environment? (I think of the cuttlefish shifting colors in an instant.) (Still no harm here.)

* We (Dick, et al.) have identified that something interesting happened there. However, as Rick says below, there is no sense in reverse-engineering the results of a study with different conclusions into a model of our own paradigm. There is nothing here but an interesting question to model independently -- am I right? A tough spot for gathering evidence, however, is measuring brain activity with such crude instruments as we currently have: measuring activation is like listening to a nerve fire. They all sound the same -- "= = = = = = = = = = = = = = = = =..." We really can't know the perceptual value of that activation.

So, as I wrote, the article might be an incentive to pick away at the question from a PCT modeling approach. And of course, that is where I turn back into a spectator. But I think it is a worthwhile consideration to explain what happens when passive observation by S1 of an action performed by S2 results in similar internal events in both S1 and S2. Forget the problem of the bad measurements; consider instead that the two experiences have a similar structure, except that one is not expressed in the physical environment. To heck with the conclusions of the original authors -- my point is, what would our conclusions be?

--Bryan

[Rick Marken (2006.01.14.0840)]

Here's a post from yesterday that didn't make it through because I
posted it from the wrong address:


-----

[From Rick Marken (2006.01.13.1100)]

Richard Kennaway

Is there less here than meets the eye? ... What am I missing?

My guess is that you are (as usual) missing nothing.

We know that people (and probably animals) sometimes imagine themselves
doing what they see others doing. PCT provides one explanation of how
this might be done. However it's done, it's done in the brain. So there
must be some physiology that underlies this phenomenon. The
physiological result described in the article _might_ be related to
this phenomenon, but -- as you so clearly point out -- it might not. I
think the main point is that you can't look to the physiology for
explanations of functional phenomena. That seems to me like looking at
the hardware of a processor to understand how a software program works.

As you say, the same physiological result may be consistent with many
different functional explanations of a phenomenon. I think physiology
can show how a particular explanation might be implemented in
"wetware", but in order to show this the physiological work must be
based on the functional model. That is, once you have a model of the
software that produces the behavior (in our case, PCT) you can see how
that software might be implemented in hardware -- what kinds of
hardware processes, like AND and NOR gates, are required to carry out
the proposed functions of the software. If the physiologists did the
research to test a particular functional model of imagination then the
results might be interesting. Of course, if they did that, then they
would surely have done the research quite differently and they would
probably not have come to the rather far-fetched conclusion that there are
special "mirror cells" that detect intentions.

Best

Rick

[From Erling Jorgensen (2005.01.15 0230 EST)]

Richard Kennaway (Fri, 13 Jan 2006 16:18:09 +0000)

Your comments lead me to some reflections, not so much
about the "mirror neurons" research, as to the core
PCT model. (BTW, I know you know most of this, but there
are potentially 99 of us involved in this conversation, at
one remove or another. What one person might not find
intriguing, another might...)

It may be interesting that there is such a localisation
of specific (classes of) perception to specific neurons,
but the popular extrapolations about how this explains
empathy seem to me unwarranted. What am I missing?

You're catching one of the main problems with these
reports, as far as I can tell. To claim that one is
explaining empathy is to run off enamored with possible
conclusions, before the phenomenon has even been nailed
down.

But let's go back to the first part of your sentence --

It may be interesting that there is such a localisation
of specific (classes of) perception to specific neurons,

This is of interest, I believe, from the standpoint of
the PCT model. In 1973, Bill Powers proposed a functional
model of how to apply the insights of cybernetics --
specifically, negative feedback control -- to psychological
phenomena. It was a retooling of Norbert Wiener's
work, but specified from the point of view of the control
system, not the external observer/designer/engineer.

An important part of Powers' model was its modularity.
It was built out of elemental control loops, with a
hierarchical proposal for how to link them together.
Subsequent working demos and simulations, by Powers
and others, confirmed that a hierarchical arrangement,
properly tuned, could produce stable control of multiple
variables simultaneously, without unstable oscillation.

This body of work constituted a _prediction_ about the
functional relationships that one might (must?) find,
in how living systems, specifically human, operate.
Powers attempted to be as parsimonious as possible with
his model -- as is true of all good models -- keeping to
the minimum number of concepts necessary, to try to
account for human functioning. And here I refer simply
to the operationalized components of the basic control
system: namely, two variables within the loop, perception
and output, two variables from outside the loop,
reference and disturbance, and a couple of parameters,
slowing factor and gain -- the latter of which could be
partitioned into various points around the loop, but
which for convenience is generally included in the
output equation. There was a further principle, that
if basic control systems were linked hierarchically,
then each higher level _had to_ operate with a slower
time constant.

That's it. That's the basic model. This was a functional
description, entirely specified as to its internal
relationships, which appeared to apply to a growing
range of perceptions-under-control, as later simulations
began to demonstrate.
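For readers following along, that basic model can be sketched in a few lines of code. This is only an illustrative toy of my own (the gain, slowing factor, and disturbance values here are arbitrary choices, not Powers' published parameters):

```python
def run_loop(reference, disturbance, gain=50.0, slowing=0.02, steps=200):
    """One elemental control loop, using the components listed above:
    perception and output inside the loop, reference and disturbance
    from outside, with gain and a slowing factor as parameters."""
    output = 0.0
    perception = 0.0
    for _ in range(steps):
        perception = output + disturbance            # trivial environment function
        error = reference - perception               # comparator
        output += slowing * (gain * error - output)  # leaky-integrator output
    return perception

# Despite a constant disturbance, the perception settles near the
# reference (about 9.75 with these parameters: r = 10, d = -3).
print(run_loop(reference=10.0, disturbance=-3.0))
```

With a high enough gain and a small enough slowing factor, the residual error shrinks toward zero; make the slowing factor too large relative to the gain and the loop oscillates unstably, which is the tuning issue the simulations mentioned above address.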

What was not pre-specified was the specific implementation
of these functional components within the (human) nervous
system. Essentially it was a prediction, that _no matter
how_ a perception might actually be constructed, and _no
matter how_ a motor output might actually be constructed,
these functional relationships should hold! That was an
amazing prediction.

As a secondary elaboration of his basic model, Bill Powers
made some educated guesses about the classes of perception
that one might find humans operating with. This is the H
in Hierarchical PCT, with at present 11 qualitative types
of perception, seemingly arranged in some kind of layering.
This has never been an essential part of PCT. For myself,
I treat it as a "grounded qualitative theory," in the
technical sense of that term, with very useful heuristic
properties.

Nevertheless, these hierarchical layers of perception
were also _predictions_ -- not essential to the operation
of the model, but proffered nonetheless. Powers was
saying we should be able to find something in human
neurophysiology corresponding to classes such as these.

Enter, the "mirror neurons." In light of this second
tier of predictions made by Powers, the fact that these
researchers found (to use your words) "such a localisation
of specific (classes of) perceptions to specific neurons",
is indeed interesting. To call them "cells that read
minds" certainly seems unwarranted. But to call them
"perceptions caught in the act of perceiving" may be
another matter, of no small significance.

What surprised the original researchers, as far as I can
tell, was that "observing" and "acting" should utilize
the same sets of neurons. This seems to be because
observing and acting -- according to their (implicit)
model of how to categorize phenomena -- appear to be
such different behaviors. But a model that controls
perceptions by closing the loop between what is acted
upon and what is observed, does not have that same
problem.

What was an anomaly to them does not seem so to us on
CSGNet. This seems to be the import of your question --
"Is there less here than meets the eye?" We ask, "What
was the monkey perceiving that could have been the same
in either condition?" And we have put forward a few
candidates, such as the perception of "grasping" itself,
or that of a transition (and/or relationship) such as
"toward-the-mouth," or perhaps the event-perception
(or is it a program) of "eating."

The researchers observed an apparent regularity in the
firing of the (so-called) mirror neurons. To a PCT way
of thinking, we see stabilized regularities all the time
-- we either call them controlled perceptions, or the
reference standards to which controlled perceptions are
brought.

So the anomaly of this research is a little different
for those of us using the PCT model. We already know
how observing and acting can be implicated in a common
control loop. What is unclear to me is whether passive
observation is indeed control.

See, that's the interesting question that I think this
research suggests. It seems the mirror neurons were
already found to be predictably operative in certain
actions of the monkey (eating a peanut, or whatever.)
According to our PCT model (and slogan!) of such things,
all behavior is the control of perception. And even
the varying behavior that is part of controlling a higher
level perception, is itself enacted as a control process,
akin to pursuit tracking.

So then, if the same neurons are firing in a (sufficiently)
same way with both active control and passive observation,
it seems that both are instances of control at a similar
level.

I can see how continuing to observe someone else eat a
peanut (or whatever) entails some form of control. There
is at least one stabilized variable apparent, which is
maintained despite disturbances -- the very act of
"continuing" with the observing. But as Bjorn Simonson
(2006.01.13,11:45 EUST) has pointed out, the _nature_ of
the observations are quite different when one is doing the
eating oneself. Tactile monitoring comes into play, and
orientation is vastly different, and the outcome leads
to tasting and actually eating. So, how could the control
be precisely at the same level? ...(so much so that the
locale of cell firing is apparently the same.)

Assuming we get past the question of which perception
at which level is jointly involved, I think a deeper
issue is raised by this matter of observation. Control,
by definition, means matching-to-template. When we
simply observe something, are we matching it to a
template? The notion of "recognition" seems to contain
at least an implicit sense of template in its etymology,
as a "re-cognizing."

Is there a form of observing that foregoes all templates?
There are certainly all kinds of background sensations
that we routinely disregard, and in that sense we are
not controlling for them. But when we shift and attend to
them, aren't we then actively controlling the perception
of keeping them in attentional focus? And don't we have
a colloquial term for this, which incidentally uses a
purposive word? -- we call it "listening intently."

Was the monkey with the "mirror neurons" observing (the
peanut!) intently? Or was it just setting a reference
for wanting that peanut? -- the same sustained reference
it used when it ate peanuts itself. Maybe, in the case
of these mirror neurons, it is not so much perceptions,
but rather _a reference signal_ "caught in the very act
of wanting."

Mirror cells do not detect intentions. They enact them!

All the best,
Erling

NOTICE: This e-mail communication (including any attachments) is CONFIDENTIAL and the materials contained herein are PRIVILEGED and intended only for disclosure to or use by the person(s) listed above. If you are neither the intended recipient(s), nor a person responsible for the delivery of this communication to the intended recipient(s), you are hereby notified that any retention, dissemination, distribution or copying of this communication is strictly prohibited. If you have received this communication in error, please notify me immediately by using the "reply" feature or by calling me at the number listed above, and then immediately delete this message and all attachments from your computer. Thank you.
<<<<RCMH>>>>

[From Bryan Thalhammer (2006.01.15.1007 CST)]

Erling,

I liked both of your posts on this. The original note from Dick piqued my curiosity but unfortunately did not inspire me to articulate the topic in terms of PCT as well as you have. To me, in the context of what I had researched in terms of observation and perceptual control, this topic is of great interest. I particularly relished this phrase:

"Maybe, in the case of these mirror neurons, it is not so much perceptions,
but rather _a reference signal_ 'caught in the very act of wanting.'"

Maybe the subject line here should be renamed: caught in the act of wanting...

Let me also link back to the Dick Robertson et al. paper "The Self as a Control System" as a major reference on this act of wanting topic.

--Bryan

[Erling Jorgensen (2005.01.15 0230 EST)]

Richard Kennaway (Fri, 13 Jan 2006 16:18:09 +0000)

Your comments lead me to some reflections, not so much
about the "mirror neurons" research, as to the core
PCT model...

...

Is there a form of observing that foregoes all templates?
There are certainly all kinds of background sensations
that we routinely disregard, and in that sense we are
not controling for them. But when we shift and attend to
them, aren't we then actively controling the perception
of keeping them in attentional focus. And don't we have
a colloquial term for this, which incidently uses a
purposive word? -- we call it "listening intently."

Was the monkey with the "mirror neurons" observing (the
peanut!) intently? Or was it just setting a reference
for wanting that peanut? -- the same sustained reference
it used when it ate peanuts itself. Maybe, in the case
of these mirror neurons, it is not so much perceptions,
but rather _a reference signal_ "caught in the very act
of wanting."

Mirror cells do not detect intentions. They enact them!

All the best,
Erling

[Martin Taylor 2006.01.15.11.49]

[From Erling Jorgensen (2005.01.15 0230 EST)]
Richard Kennaway (Fri, 13 Jan 2006 16:18:09 +0000)

A very interesting posting! Just one (long) comment...

It may be interesting that there is such a localisation
of specific (classes of) perception to specific neurons,

This is of interest, I believe, from the standpoint of
the PCT model....
...
Nevertheless, these hierarchical layers of perception
were also _predictions_ -- not essential to the operation
of the model, but proffered nonetheless. Powers was
saying we should be able to find something in human
neurophysiology corresponding to classes such as these.

Enter, the "mirror neurons." ...

What surprised the original researchers, as far as I can
tell, was that "observing" and "acting" should utilize
the same sets of neurons. This seems to be because
observing and acting -- according to their (implicit)
model of how to categorize phenomena -- appear to be
such different behaviors. But a model that controls
perceptions by closing the loop between what is acted
upon and what is observed, does not have that same
problem.

What was an anomaly to them does not seem so to us on
CSGNet....
So the anomaly of this research is a little different
for those of us using the PCT model. We already know
how observing and acting can be implicated in a common
control loop. What is unclear to me is whether passive
observation is indeed control.

I don't think it's unclear at all. It is not control, since the passive observer is not acting to influence the value of the perception.

According to our PCT model (and slogan!) of such things,
all behavior is the control of perception.

But nothing in PCT (in any commonly discussed variant) says "all perception is controlled by behaviour". A simple consideration of the available degrees of freedom for perceptual input and for motor (and, if you want, chemical) output says that almost all perceptions are NOT controlled at any given moment.

Even disregarding the dynamic bandwidths, and dealing only with the static degrees of freedom such as optic nerve fibres and auditory frequency channels, we have on the order of millions of input degrees of freedom, and only about a hundred output degrees of freedom. Add to that the dynamical consideration that the visual "channels" can vary with a bandwidth on the order of tens of Hz and the auditory ones rather faster, while the motor variables are limited to units, and you have another one or two orders of magnitude difference.

The conclusion must be that we control only about .001% of the perceptual signals that we could potentially be controlling. Of course, we fluently switch which perceptions we control, so the lack isn't obvious to introspection unless you look for it (imagine simultaneously controlling the movement of each leaf on the tree you can see rustling in the wind).
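Martin's arithmetic can be made explicit. The figures below are just the order-of-magnitude assumptions from his post, not measured values:

```python
# Static degrees of freedom, per the post's own estimates.
input_df = 1_000_000   # "on the order of millions" of input degrees of freedom
output_df = 100        # "about a hundred" output degrees of freedom

static_fraction = output_df / input_df   # 1e-4, i.e. 0.01%

# Dynamics: visual channels varying at tens of Hz versus motor variables
# at roughly unit bandwidth add about another factor of ten.
input_bw, output_bw = 10.0, 1.0
dynamic_fraction = static_fraction * (output_bw / input_bw)

print(f"{dynamic_fraction:.3%}")   # → 0.001%
```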

If we follow Bill P's model, the perceptions we control are those that (at least sometimes), either directly or through the side effects of controlling actions, influence the values of intrinsic variables. The rest can be passively observed with no loss.

So then, if the same neurons are firing in a (sufficiently)
same way with both active control and passive observation,
it seems that both are instances of control at a similar
level.

I don't think so. I think it's just a question of whether the particular perceptions represented by the firings are being controlled or not at that moment. The perceptual processes are likely not to change when a particular perception comes under control.

...There are certainly all kinds of background sensations
that we routinely disregard, and in that sense we are
not controling for them. But when we shift and attend to
them, aren't we then actively controling the perception
of keeping them in attentional focus.

I believe so.

Some of this is discussed in <http://www.mmtaylor.net/PCT/DFS93/DFS93_9.html>

Martin

[From Rick Marken (2006.01.15.1050)]

Bryan Thalhammer (2006.01.14.1110 CST) --

So, as I wrote, the article might be an incentive to pick away at the question from a PCT modeling approach.

I treat physiological findings as a constraint rather than as information that can inform the functional modeling. And I think that most physiological findings put a pretty weak constraint on that modeling. I can't imagine what we might now find physiologically that would lead to a significant change in the basic functional organization of the PCT model. But I imagine that there might be something.

PCT was not built on the basis of physiological findings but it certainly is constrained by what we know about the structural and functional properties of the nervous system. For example, the basic control loop model is based on observation of behavior, not on observation of the nervous system. But the resulting model does not include features that are inconsistent with the physiology. Perceptual functions are consistent with what we know about receptive field research (like the "mirror cell" studies discussed); perceptual signals are consistent with what we know about the behavior of afferent neurons; the comparator function is consistent with what we know about neural synapses; output and reference signals are consistent with what we know about the architecture and functional characteristics of efferent neurons.

Physiological findings, like the "mirror cell" results, that are obtained outside the framework of the PCT model, are interesting, it seems to me, only to the extent that they seem consistent or inconsistent with the basic functional organization of the model. As Erling Jorgensen (2005.01.15 0230 EST) points out, the "mirror cell" findings are surprising only if one doesn't expect perceiving and behaving to utilize the same sets of neurons. But we know that behavior is the control of perception. So it's really no surprise that the neurons that perceive, say, eating a peanut would fire when one is eating a peanut (controlling the perception "peanut into mouth") and when one is watching another person eat a peanut (and passively perceiving "peanut into mouth"). This is really not more surprising than finding that the neuron that fires when you hear an E flat major chord played by Glenn Gould is the same neuron that fires when you hear an E flat major chord played by yourself.

My readings in cognitive psychology reveal that the field of "neurocognition" is pretty hot. The basic idea of neurocognition is that we can learn a lot about cognition by studying the brain activity that occurs while we carry out various cognitive activities. I don't see this. For example, I know a researcher at UCLA who used magnetic resonance techniques to study the brain activity of compulsive and non-compulsive gamblers. Of course, he finds that there is a difference; when compulsive gamblers gamble, one area of the brain "lights up" more than the same area in non-compulsive gamblers. This is the kind of finding that makes the Science column of the NY Times (like the mirror cell study). I don't know why people find this kind of thing informative. It seems to me that such a finding tells you very little more than you knew before you did the study, which is that compulsive gamblers differ from non-compulsive gamblers. It doesn't seem to do much to explain compulsive gambling, unless you consider "area X in the brain is more active in compulsive than in non-compulsive gamblers" to be an explanation.

I just don't see how physiological results can inform our understanding of the functional organization of behavior. I can see that physiological studies that are properly conducted in the context of a functional model, like PCT, could produce results that show how the functional model might be implemented in the "wetware" of the nervous system. But, again, looking to research on the physiology of the nervous system for information about how behavior works strikes me as equivalent to looking at the chemistry of air molecules to understand the aerodynamics of flight.

Best

Rick


---
Richard S. Marken Consulting
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[From Erling Jorgensen (2006.01.15 1710 EST)]

[Martin Taylor 2006.01.15.11.49]

[Erling Jorgensen (2005.01.15 0230 EST)]

(Martin, either you are incredibly slow in responding,
or I don't know what year it is! ;-> )

So the anomaly of this research is a little different
for those of us using the PCT model. We already know
how observing and acting can be implicated in a common
control loop. What is unclear to me is whether passive
observation is indeed control.

I don't think it's unclear at all. It is not control,
since the passive observer is not acting to influence
the value of the perception.

Yeah, but I struggle with this. Isn't turning one's head
to listen controlling the clarity or intensity of a sound,
even when other aspects of the perception (an orchestral
piece, for instance) cannot be influenced? I guess at
that point the observation ceases to be simply passive,
and starts to become control.

More pointedly, however, how would we tell the difference
between passive (non-controlling) observation, and simply
not noticing? Wouldn't any test of the passively observed
perception shift it into a controlled variable? And isn't
the very process of recognizing a given perception an
instance of matching-to-template, and thus a form of
control? Sorry to be dense here, but I'm trying to figure
out where on this wonderful PCT map of tight tautological
relationships we would look when it comes to locating
certain phenomena.

I think a related question is, what is the difference
between perceptions not currently being noticed, and
observation controlling for a zero reference for a given
perception? Maybe the best that can be said is that we
have no a priori way of telling the difference. We simply
test for the presence of a controlled variable (with a
non-zero value of the perception, for instance), and see
whether there is disturbance-resistance.
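The test Erling describes can be rendered as a toy simulation: push on the candidate variable with a disturbance and see whether it resists. This is a sketch under the usual elemental-loop assumptions; the function name and all parameter values are my own illustrative choices:

```python
def final_value(controlled, disturbance=5.0, gain=50.0, slowing=0.02, steps=100):
    """Apply a step disturbance to a variable and return where it ends up.
    If controlled, a loop holds the variable to a reference of zero."""
    value, output = 0.0, 0.0
    for _ in range(steps):
        value = output + disturbance          # environment function
        if controlled:
            error = 0.0 - value               # reference held at zero
            output += slowing * (gain * error - output)
    return value

# Uncontrolled: the variable takes the full effect of the disturbance.
print(final_value(controlled=False))   # → 5.0
# Controlled: the disturbance is almost entirely resisted.
print(final_value(controlled=True))    # → about 0.1
```

As Erling notes, the test only reveals control when it is actually exercised: before probing, the two cases look identical.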

According to our PCT model (and slogan!) of such things,
all behavior is the control of perception.

But nothing in PCT (in any commonly discussed variant)
says "all perception is controlled by behaviour".

I agree that not all perception is controlled by behavior.
But I wrestle with understanding the point at which a
passively observed perception becomes controlled. I
wonder whether an attentional shift -- even with no other
external behavior signifying control -- is one way of
demarcating the boundary.

What internal behavior constitutes "focusing one's
attention"? Is this the alteration of a gain parameter?
Would that be an output gain or some other part of the
loop? Is this tuning or amplifying the gain on the
perceptual input side of the equation? Can we even do
that internally, or must it happen at the environmental-
feedback-function spot on the loop (akin to 'reaching
for a microscope')? I think we need to get some clear
hypotheses as to how certain (apparent) phenomena map onto
this PCT model, so that we can then propose falsifiability
experiments, in the spirit of strong-inference testing of
the model.

A simple consideration of the available degrees of
freedom for perceptual input and for motor (and, if
you want, chemical) output says that almost all
perceptions are NOT controlled at any given moment.

You raise some pretty powerful evidence in this part
of your post.

Even disregarding the dynamic bandwidths, and dealing
only with the static degrees of freedom such as optic
nerve fibres and auditory frequency channels, we have
on the order of millions of input degrees of freedom,
and only about a hundred output degrees of freedom.

Does this suggest that it is easier to construct a
perception than it is to construct an additional output?

Even as I say this, I realize that the "output" for most
of the model's hierarchically linked control systems is
a reference (an address?) for a perception at that level.
It is only at the lowest level, it seems, that perceptual
degrees of freedom give way to environmental degrees of
freedom in the outgoing channel. _Of course_ there are
far fewer genuine motor (and chemical?) output degrees
of freedom. Most of the outgoing series of references
are addresses for desired perceptions -- (or so says one
proposal for integrating memory into the PCT model) --
and thus they would seem to be piggy-backing onto the
multitude of perceptual degrees of freedom!

[As a matter of curiosity, I would be interested in
knowing how it was determined that we have "only about
a hundred output degrees of freedom". Is this the
number of muscles that we can independently move? And
when you add "(and, if you want, chemical)", does it
really remain such a broad gulf in df's? Does "chemical"
here refer only to the number of endocrine hormones we
produce? End of aside.]

Add to that the dynamical consideration that the visual
"channels" can vary with a bandwidth on the order of
tens of Hz and the auditory ones rather faster, while
the motor variables are limited to units, and you have
another one or two orders of magnitude difference.

The conclusion must be that we control only about .001%
of the perceptual signals that we could potentially be
controlling. Of course, we fluently switch which
perceptions we control, so the lack isn't obvious
to introspection unless you look for it (imagine
simultaneously controlling the movement of each leaf
on the tree you can see rustling in the wind).

Again, this is pretty persuasive evidence.

If we follow Bill P's model, the perceptions we control
are those that (at least sometimes), either directly or
through the side effects of controlling actions,
influence the values of intrinsic variables. The rest
can be passively observed with no loss.

I like your way of formulating this. It seems that
much of what crosses our perception passes by with no
preference expressed. And ultimately, preferences are
attached to whatever is needed for the intrinsic variables
to be under satisfactory control.

In mindfulness meditation, there is this notion of
noticing without grasping, neither pushing away nor
hanging on. Just noticing. In the kind of therapy I
do, such "acceptance skills" are often the hardest ones
to teach.

I still have a hold-out question, however, of whether
"passively observed" might involve a more active process
of adjusting one's references to match whatever one is
perceiving. _Subjectively_ that sometimes seems like
what is going on. It is almost as though tracking is
still taking place, but here with references doing the
pursuing.

There may be some flow-through mechanism that would not
complicate the model unduly. Still, not all appearances
are necessarily phenomena requiring a mechanism. Some
are merely epiphenomena, with an underlying basis quite
different from what appears on the surface. That is one
reason I like to stay close to the core PCT model, and
not be in a rush to multiply so-called explanatory
concepts and components.

Your comments are appreciated, as ever.
All the best,
Erling


[From Rick Marken (2006.01.15.1730)]

Erling Jorgensen (2006.01.15 1710 EST)

Martin Taylor (2006.01.15.11.49) --

I don't think it's unclear at all. It is not control,
since the passive observer is not acting to influence
the value of the perception.

Yeah, but I struggle with this...

Wouldn't any test of the passively observed perception shift it into a controlled variable?

I think the easiest way to see the difference between passive observation and control is to go back to the basics. Try running the basic tracking task at Nature of Control. First press "Run" and watch the cursor move back and forth. The position of the cursor is a perceptual variable that you are observing passively. An observer of your behavior could test to see if you are, indeed, passively observing the cursor by simply asking you what the cursor is doing. If you say "It's moving back and forth" then the observer can be pretty sure that you are, indeed, passively observing the cursor. The fact that you are not controlling the cursor would also be evident to an observer at the end of a trial run, where they would see that the stability measure is 1, meaning that the expected and observed variances in cursor position are equal; a stability value of 1 means that the cursor is not being controlled at all. Now press "Run" again and this time try to keep the cursor on target using the mouse. You are still perceiving cursor position, but now you are also acting to control this perception. The fact that you are now controlling cursor position would, again, be evident to an observer at the end of the trial run because the stability value will be much greater than 1 (say 8 or more; it's up near 10 for me).
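Rick's stability measure can be mocked up directly. The statistic below follows his description (expected cursor variance, taken as disturbance variance plus mouse variance, divided by observed cursor variance); the cursor model and noise levels are my own simplifications, not the actual demo's code:

```python
import random

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def stability(disturbance, mouse):
    """Expected cursor variance (as if disturbance and mouse effects were
    independent) divided by the cursor variance actually observed."""
    cursor = [d + h for d, h in zip(disturbance, mouse)]
    return (variance(disturbance) + variance(mouse)) / variance(cursor)

random.seed(1)
d = [random.gauss(0, 1) for _ in range(2000)]

# Passive observation: the hand does nothing, so observed variance
# equals expected variance and the stability measure is 1.
passive = stability(d, [0.0] * len(d))

# Control: the hand nearly cancels the disturbance, so observed
# variance collapses and the measure is much greater than 1.
hand = [-x + random.gauss(0, 0.1) for x in d]
controlled = stability(d, hand)

print(passive, controlled)
```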

The simple tracking task shows clearly (at least to the person doing it) that passive observation and control are two distinct modes of behavior. Going from passive observation to control mode is like flipping a switch. The two modes don't shade into one another. You can sit back and passively observe changes in cursor position or you can flip the mental switch, grab the mouse and control cursor position. What it is that flips the switch is not specified in B:CP. Maybe it's the work of some higher order system(s); maybe it's consciousness. But the two different modes of behavior -- passive observation and control -- are quite easy to distinguish, subjectively (when you do the switching yourself) and behaviorally (when you watch another person doing it).

And, of course, the same perceptual function is almost certainly involved in perceiving cursor movement in both passive observation and control mode. The neuron that carries the perceptual signal that is the output of this perceptual function is, I believe, equivalent to what the "mind reading" cell researchers call "mirror cells".

Best

Rick

Richard S. Marken Consulting
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[From Bjorn Simonsen (2006.01.16,10:30EUST)]

From Rick Marken (2006.01.15.1730)
I think the easiest way to see the difference between passive
observation and control is to go back to the basics. Try running the
basic tracking task at
Nature of Control.

I tried "Run" and watched the cursor move back and forth. I said aloud,
"It's moving back and forth," to make it possible for the observer to be
pretty sure that I passively observed the cursor.
I tried "Run" and kept the cursor on target using the mouse.
I tried "Run" and thought to myself: "This is funny; maybe I should also
learn to program in Java."
I tried "Run" and watched the cursor move back and forth. I said _silently
to myself_, "It's moving back and forth."

I did the third and fourth "Run" to show that it _isn't_ easy to tell
whether living organisms are imagining or passively observing the cursor
(showing no actions).

You are the PCT teacher in our relationship, and this demo does show the
difference between passive observation and control. But you have omitted
from your demo the case where people do as I did in my third and fourth
"Run". You emphasize that the demo presupposes that RMS error can be used
to measure control when the reference value of the controlled variable is
known. I think we must allow for my third and fourth "Run" when we study
the monkey watching the student eating.

I checked the three equations, and either I am doing something wrong or
you have made a mistake. The three equations, if I do _not_ use the mouse,
show:
1. C - M = 0
2. M - D = 0
3. C - D = 1
If I work these three equations out, I find:
1. C = M
2. M = D
Substituting 1. and 2. into 3.:
3. M - M = 1
4. 0 = 1
And that is wrong.

The three equations, if I use the mouse, show:
Error = 36.2 and Stability 2.6
1. C - M = 0.098
2. M - D = -0.858
3. C - D = 0.425
If I work these three equations out, I find:
1. C = M + 0.098
2. D = M + 0.858
Substituting 1. and 2. into 3.:
3. M + 0.098 - (M + 0.858) = 0.425
   -0.760 = 0.425
And that is wrong.

Am I wrong?

But the two different modes of behavior --
passive observation and control -- are quite
easy to distinguish, subjectively (when you
do the switching yourself) and behaviorally
(when you watch another person doing it).

Yes, I agree. But there are other modes of behavior. One of them is not
passive observation, but thinking upon something quite different.

And, of course, the same perceptual function
is almost certainly involved in perceiving
cursor movement in both passive observation
and control mode.

Yes, I agree. But what if the observer is neither controlling nor exercising
passive observation?

bjorn

[From Bjorn Simonsen (2006.01.16,14:30EUST)]

Martin Taylor 2006.01.15.11.49

From Erling Jorgensen (2005.01.15 0230 EST)

A very interesting posting!

Yes. Reading texts is often like listening to a concert. You have
heard it before, but some orchestras make the experience more wonderful.
Your text, Erling Jorgensen, was a great experience. Thank you.

control loop. What is unclear to me is whether passive
observation is indeed control.

I don't think it's unclear at all. It is not control, since the
passive observer is not acting to influence the value of the
perception.

May I ask if something that looks like passive observation can be control in
imagination mode?
Let us suppose the watching monkey had played a game himself. He was
satisfied and had no wish for ice cream. Let us suppose he watched to see if
the student put his tongue out before he tasted the ice cream.
The monkey did not show any actions. He was in imagination mode and asked
himself whether the student put out his tongue when he lifted his arm the
first time, the second time, the third time, and so on.
Let us say the monkey perceived the tongue out/the tongue not out, a
configuration perception. Let us also say that the monkey perceived the
student's hand lifting to the mouth, a transition perception (maybe an event
perception). Let us say, to conclude, that the monkey perceived the result,
tongue out or tongue in, a relationship perception (either/or).

Is it possible to say that the monkey controlled his imagined perceptions at
a relationship level?
If he did, the monkey did not act to influence the value of the perception,
but he said to himself: "This time I was right" or "this time I was wrong."
In this case there are no actions (I think), but there are disturbances (the
student eating ice cream) and there are perceptual signals at a number of
levels. There are perceptions.
This long story is a test from me, to find out whether something that may
look like passive observation, because there is no acting that influences
the perception, is control in imagination mode.

If we follow Bill P's model, the perceptions we control are those
that (at least sometimes), either directly or through the side
effects of controlling actions, influence the values of intrinsic
variables. The rest can be passively observed with no loss.

I thought Bill's model taught us that if we control a perception and don't
manage to bring the error to zero (or nearly so), the reorganizing system
senses the state of physical quantities intrinsic to the organism and
controls those quantities with respect to genetically given reference
levels.
Are those the intrinsic variables you are thinking of?
If that is correct, you are saying that the perceptions I control when I
brush my teeth are passively observed perceptions. I don't usually
reorganize when I brush my teeth.
bjorn

[Martin Taylor 2006.01.16.09.35]

[From Bjorn Simonsen (2006.01.16,14:30EUST)]
>Martin Taylor 2006.01.15.11.49

May I ask if something that looks like passive observation can be control in
imagination mode?

Certainly, but that's internal, and never observable by an external experimenter without using neurophysiological techniques. "Passive" in "passive observation" refers to the fact that one is not acting on the thing being observed.

The degrees of freedom argument for the existence of passive observation applies only where there's a bottleneck restricting the number of simultaneous control loops at that point. The bottleneck we can identify (to answer Erling's question) is the number of independent joint and muscle movements we can do that can affect something outside the skin.

It doesn't matter what is going on inside the skin; the number of degrees of freedom available for control loops that pass through the skin is limited by the number available at the most restricted point. I have no idea where the actual restriction lies (there could be a more restrictive place in the internal circuitry), but I do know that most of us have only five fingers per hand, each with four joints (including the one near the wrist) that have a total of about five degrees of freedom (one for each joint except the knuckle), and likewise for toes. We have a few degrees of freedom for spinal bends and twists, arm and leg movements, and a few facial muscles. The total is in the 100-200 range.
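Martin's back-of-envelope count can be tallied explicitly. Every individual number below is an illustrative assumption consistent with the prose, not an anatomical measurement; only the order of magnitude matters.

```python
# Illustrative tally of output degrees of freedom, following Martin's
# back-of-envelope estimate.  Each count is an assumption for the sketch.
dof = {
    "fingers": 2 * 5 * 5,  # two hands, five fingers, ~5 dof each
    "toes":    2 * 5 * 5,  # "likewise for toes"
    "spine":   6,          # bends and twists
    "arms":    2 * 7,      # shoulder, elbow, wrist per arm
    "legs":    2 * 6,      # hip, knee, ankle per leg
    "face":    20,         # a few facial muscle groups
}
total = sum(dof.values())
print(total)  # lands within the 100-200 range Martin cites
```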

When you come to the real issue, which is degrees of freedom per second, matters are worse, because most people can move fingers with only about two degrees of freedom for bend and one for lateral position, and can do it no faster than 5 or 10 Hz (to be generous!), can hardly move toes at all, are very slow (around 1 Hz) for spinal movements and most facial movements, and so on. But the sensory systems on which perceptions of the outer world are based are much, much faster, as well as having far more capacity for independent variation.

None of these arguments apply at all for feedback loops that happen entirely within the skin. In particular, they don't limit how much control a person might be doing in imagination when passively observing something external.

>If we follow Bill P's model, the perceptions we control are those
>that (at least sometimes), either directly or through the side
>effects of controlling actions, influence the values of intrinsic
>variables. The rest can be passively observed with no loss.

I thought Bill's model taught us that if we control a perception and don't
manage to bring the error to zero (or nearly so), the reorganizing system
senses the state of physical quantities intrinsic to the organism and
controls those quantities with respect to genetically given reference
levels.
Are those the intrinsic variables you are thinking of?

Yes, it's those intrinsic variables, but I wasn't talking about reorganization at all. I was talking about the mechanisms that affect the values of the intrinsic variables. Whether some measure of error in the overt perceptual control hierarchy is one of the intrinsic variables is a matter of what I call "Speculative PCT" (the list being "Basic" or "Necessarily true" PCT, "Conventionally assumed" PCT, which includes HPCT, and "Speculative" PCT, which includes adjuncts that seem reasonable but that have no experimental or introspective support).

My understanding of the reorganizing system is that it doesn't directly _control_ the intrinsic variables as your wording would suggest. The control is a kind of "winter leaf" effect, in which reorganization changes the landscape of perceptual control randomly until the actions of the perceptual control system in its current environment have effects that bring the intrinsic variables ("winter leaves") into a safe range near their genetically determined reference values.

Intrinsic variables are those that are required for life, such as the chemical balances in the blood. Even if restructuring (reorganization) of the perceptual control system does work the way Bill says, and if the correct organization is HPCT, it's anybody's guess just what the intrinsic variables are. We would probably all agree on a few of them, like blood sugar level, but after that, I don't think there would be much agreement. One might make a good case for the argument that persistent, and particularly rising, error in currently controlled perceptions might relate to one or more intrinsic variables, but how would you prove it, when the very structure you are assuming to be reorganizing is itself ill-defined?

If that is correct, you say that the perceptions I control when I brush my
teeth are passively observed perceptions.

The perceptions you control are, by definition, not passively observed.

I don't usually reorganize when I
brush my teeth.

I don't see the connection. Does brushing your teeth cause some intrinsic variable to deviate from its reference condition?

Martin

[From Rick Marken (2006.01.16.0820)]

Bjorn Simonsen (2006.01.16,10:30EUST)--

I checked the three equations, and either I am doing something wrong or
you have made a mistake. The three equations, if I do _not_ use the mouse,
show:
1. C - M = 0
2. M - D = 0
3. C - D = 1

Those are not equations that you see in the display when you complete a run (though I admit they look like equations, so perhaps I should change my notation). "C - M" is just a label meaning the correlation between cursor (C) and mouse (M) variations. The "= 0" means that the correlation between C and M is 0, which it should be in this case, since the mouse doesn't move at all while the cursor moves with the disturbance.
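The point that these are three pairwise correlations, not a system of simultaneous equations, can be checked with a short sketch. It assumes the demo's cursor is the sum of mouse and disturbance (c = m + d), and it assumes the display reports 0 for a correlation involving a constant signal (strictly, such a correlation is undefined).

```python
import math

def pearson(xs, ys):
    """Pearson correlation; returns 0.0 when either signal is constant,
    mirroring (by assumption) how the demo display reports that case."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    if sx == 0.0 or sy == 0.0:
        return 0.0
    return cov / (sx * sy)

# Passive case: the mouse never moves, so the cursor simply follows
# the disturbance (c = m + d with m constant).
d = [math.sin(0.05 * t) for t in range(400)]  # disturbance
m = [0.0] * 400                               # mouse at rest
c = [mi + di for mi, di in zip(m, d)]         # cursor

print(pearson(c, m))  # "C - M = 0": mouse is constant
print(pearson(m, d))  # "M - D = 0": likewise
print(pearson(c, d))  # "C - D = 1": cursor tracks the disturbance
```

Because the three numbers are correlations among three different pairs of signals, substituting one label into another, as Bjorn did, has no algebraic meaning.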

Yes, I agree. But there are other modes of behavior. One of them is not
passive observation, but thinking upon something quite different.

Of course. But Erling was just talking about passive observation vs control and that's really all that seems directly relevant to the "mirror cells" work.

And, of course, the same perceptual function
is almost certainly involved in perceiving
cursor movement in both passive observation
and control mode.

Yes, I agree. But what if the observer is neither controlling nor exercising
passive observation?

My expectation, based on PCT, would be that the same perceptual function would again be involved. That is, if you imagine the cursor moving back and forth, someone with an electrode in the appropriate occipital lobe neuron in your brain would, I believe, detect the same level of firing that occurs in that neuron when you are passively watching or controlling this perceptual variable (changes in cursor position).

Best

Rick


---
Richard S. Marken Consulting
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[From Erling Jorgensen (2006.01.16 1415 EST)]

Rick Marken (2006.01.15.1730)]

Erling Jorgensen (2006.01.15 1710 EST)

Hi Rick,
You're right, it is helpful to go back to the demos, and
test the word-formulations against the mathematically-
specified simulations of the phenomena.

Wouldn't any test of the passively observed perception
shift it into a controlled variable?

I think the easiest way to see the difference between
passive observation and control is to go back to the
basics. Try running the basic tracking task at ...

There was this fad of a TV show a while back, called "The
Weakest Link." I have this image of you reading my post,
coming upon the above question, and (rightfully) declaring
"You ARE the Weakest Link. Good Bye!"

That leaves me with the comfort that other parts of my
post may have been "less weak links," since people usually
reply to the portions that create disturbances & thus
errors to something they believe/perceive. ('Course,
"less weak than plainly wrong" could still be a long
distance from "unreserved validation"... ;-> )

First press "Run" and watch the cursor move back and
forth. The position of the cursor is a perceptual
variable that you are observing passively.

It was helpful to realize (again) that numerous perceptual
variables were simultaneously available, some of which
may have been controlled, while others -- like the
"position" perception -- were not. On one run, I tried
controlling the "intensity" perception of the cursor (by
dimming the monitor), but for most of that time the
"position" perception was still observable, though not
controlled.

I also tried on one run to focus on my "arrow-point" icon
switching to the little "hand-selector" icon, as I moved
the cursor over your links at the bottom. But because
I knew I wanted to try to notice the supposed attentional
shift back to noticing the position of the cursor, it
never really left my awareness.

On one run I did control for a zero-perception of the
cursor position, by holding an envelope across the screen.
And obviously, the structure of the task did not allow
for differentiating that form of position-perception
control, because the correlation & stability factor
values came out the same way as for passive observation.

So, technically, that did help to define one limit
condition of your elegant simulation, that as it currently
stood, it could not differentiate passive observation
from controlling for a zero value of the perception. To
that limited extent, that precise model was "falsified"
as not covering zero-perception control. Presumably,
there might be modifications to the model -- like breaking
the link to perception & making it open-loop perhaps (?)
-- which might make the model approximate the (drastic)
implementation I imposed.

Continuing on as a cantankerous test subject... I tried
two different ways of controlling for keeping the position
of the cursor always to the right of the target. Since I
knew the disturbance function was a sine wave that began
to disturb the cursor to the right, one way to control
for "cursor-always-to-right" was to restart the demo each
time the disturbed cursor approached the center target.
(Predictably enough, this left me with no compiled data
to assess correlations & stability factor!...)

The other way, within the confines of the experimental
situation, was to ignore the cursor when far to the right
of the target, & to counteract disturbances that threatened
to move it past center to the left. My computer is quite
slow, as is my modem connection, so there seemed to be a
perceptual delay in my seeing the results of my actions
-- which admittedly were only actively counteracting the
disturbance half the time, because during the other half
my reference meant the cursor was not "disturbed" -- but
in any event, the measures at the end of the run were
confusing, with seemingly a negative correlation of
magnitude greater than one (!?) between cursor & mouse, &
a zero correlation, instead of an inverse one, between
mouse & disturbance. Actually, this latter result makes
sense, because during the "non-disturbing" movements of
the cursor, I was trying to buy space for mouse movements
I would need later without running off the page. So
essentially I was cancelling out any correlations, & the
net mouse-to-disturbance correlation was indeed zero.

Anyway, I thought you might be interested in these
non-normative results. And I'd strongly recommend that
you stick with better (or more compliant) test subjects
than me!

All the best,
Erling


[From Bryan Thalhammer (2006.01.16.1655 CST)]

Without a doubt, the physiological findings (neurocognition hotness notwithstanding) cannot inform a PCT model, which I think means that I agree with you and with Dag that parts of the brain lighting up don't mean anything at all.

"Area X in the brain being more active in Type 1 Ss than in Type 2 Ss is not an explanation, only an observation." I agree; it is not much of an explanation. But if that same difference (whatever it is) shows up time and again, then a NEW experiment should be designed to explain that phenomenon.

So the "Question" I have been hankering to have picked at has indeed been touched on by everyone so far in this strand (my goal), regardless of the point of departure. The deal for me is: when a person/monkey observes or acts, are there aspects of that control in the observer and actor that are based in the same (level of?) control systems that we propose? What would a model be of the event of observing that shares aspects of perceptual control with the event of acting? Also, how can one ensure that the object of the observation/acting is constrained?

An aside, as I breeze past some of the responses, is that monkeys are not particularly stupid creatures. This weekend I was at the Lincoln Park Zoo. I saw animals such as cats, primates, savannah animals, and some parrot types. What I saw were animals looking frantic, bored, and not too natural, and in some cases the keepers get smacked when an animal thinks it has a good chance, so... I wonder if the monkeys in the study had already given up trying to control for much of anything, and whether they were really behaving as naturally as they would if they saw another monkey in a natural setting eating something they wanted. I mean, they might be controlling for hatred of the experimenter, too, and all that activation could be connected with an image of the experimenter impaled on something sharp. :wink:

--Bryan

[Rick Marken (2006.01.15.1050)]

Bryan Thalhammer (2006.01.14.1110 CST) --

So, as I wrote, the article might be an incentive to pick away at the
question from a PCT modeling approach.

I treat physiological findings as a constraint rather than as
information that can inform the functional modeling. And I think that
most physiological findings put a pretty weak constraint on that
modeling. I can't imagine what we might now find physiologically that
would lead to a significant change in the basic functional organization
of the PCT model. But I imagine that there might be something.

...

I just don't see how physiological results can inform our understanding
of the functional organization of behavior. I can see that
physiological studies that are properly conducted in the context of a
functional model, like PCT, could produce results that show how the
functional model might be implemented in the "wetware" of the nervous
system. But, again, looking to research on the physiology of the
nervous system for information about how behavior works strikes me as
equivalent to looking at the chemistry of air molecules to understand
the aerodynamics of flight.

Best

Rick


---
Richard S. Marken Consulting
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[Martin Taylor 2006.01.17.00.28]

[From Erling Jorgensen (2006.01.16 1800 EST)]

Martin Taylor 2006.01.16.16.40

A "zero value" for the perception of the position of
the cursor is a very precise position perception. It
says that the cursor is at whatever location you have
specified to be represented by zero.

Aren't there some situations where we control for an
absolute zero reference, rather than just an arbitrarily
defined zero value?

Sure. Why not? When you are controlling for intensity, in particular, it's easy to control for absolutely none of whatever it is -- no sound, no light. It may not, however, be easy to attain zero error, with absolutely zero as your reference value :wink:

In your example of the person's tie in Upper Hertsworth,

(if such a place exists!)

I wasn't controlling for not seeing it; I was not in a
position to see it -- (I don't get around to Upper
Hertsworth much.) The color, for me, was indeterminate,
but so was my reference. It was not a situation that
mattered to me.

I don't think I was referring to controlling or not controlling. I was trying to illustrate the notion of possible perceptions that at the moment have an imprecise relation to the external variable they correspond to. Who knows, the very next moment you might have met the person from Upper Hertsworth at your door, and then the perceived colour of the tie would have gained a more precise relationship with the external (assumed) reality.

There's a future thread here, on the topic of control and the precision of the relation between the perception and the external situation represented in the perception. Not for today, though.

I am wondering if there a difference between actively or
deliberately ignoring a given perception -- a situation
of intent & thus control -- versus not having noticed (or
learned to notice) that perception? Would we model these
two situations differently?

I think it has implications for modeling learning.

I think so, too. One of the key elements of learning is the building of suitable perceptual input functions -- the ability to perceive what discriminates "A" from "B" in the domain you are trying to learn.

  To me,
it seems that an agreed learning environment is one where
one is taught references for how to bring perceptions into
being that may not have been there before.

I'm not sure of the implication of your wording. I'd agree if you would accept a slight rewording: "An agreed learning environment is one in which one is enabled to develop new perceptions that may not have been there before, and having those perceptions to be able to control them at varying reference levels." I would anticipate, however, that learning the perceptions and learning to control them are likely to be learned simultaneously in most cases.

  A talented
teacher often conveys a sense of their own passion, that
these things matter and are worth caring about. And I
believe it may include, not so much learning answers, as
learning to ask relevant questions and care about the
answers.

You are getting into social PCT, which is a topic that interests me. But I don't think this is the time to go into what could be a long discussion. I will just say that there's a whiff of "mirroring" in the suggestion that the teacher's passion may induce a similar passion in the student. Whether the smell is good or bad remains to be determined.

These exchanges are helpful. Thank you to you & Rick.

Likewise. The feeling is mutual.

Martin

[From Rick Marken (2006.01.17.0800)]

Bryan Thalhammer (2006.01.16.1655 CST)

The deal for me is when a person/monkey observes or acts, are there aspects of that control in the observer and actor that are based in the same (level of?) control systems that we propose.

It seems to me that this is unquestionably the case. When you see someone controlling a cursor, for example, you must be perceiving the same variable (cursor position) as the controller, or you couldn't tell that the variable might be under control.

Erling Jorgensen (2006.01.16 1800 EST)

So it appears that Rick's elegant simulation, as it stands, can indeed model the condition of a zero-value of the position-perception.

It seems to me that this discussion is getting into things that are kind of tangential to the passive observation vs control issue. I'll just say "thanks" for saying that the simulation is "elegant" but, assuming that you're talking about the tracking task I suggested as a way to see the difference between passive observation and control, it's not really a simulation; it's an actual control task. There is really no simulation running in that task at all; you (the actual control system) are doing the controlling (when you are controlling) and the passive observing (when you are passively observing). As far as zero value of the perception, there is presumably zero value of the cursor position perception when you are not looking at the cursor (or covering it up with something) though what that has to do with the difference between control and passive observation of a variable is beyond me.

Best regards

Rick


---
Richard S. Marken Consulting
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[From Rick Marken (2006.01.18.0830)]

Martin Taylor (2006.01.18.09.51) --

I'm pretty busy today so I probably won't have time to give a complete reply until later this evening. I'll just quickly reply to one of your points.

Rick Marken (2006.01.17.0800)--

I don't see that. I was thinking of the perceptual signal, which is presumably
represented as rate of neural firing. So when the rate of neural firing is
virtually zero impulses per second (there is always _some_ firing) there is
zero perception.

Don't think of intensity perception, think of position, and think of
your demo.

I was. Remember, in PCT the value of a perceptual _variable_, be it intensity or principle, is represented by the rate of afferent neural firing. So the position of a line is represented in this way. Actually, in the tracking task what is probably represented by neural firing rate is the position of the line (cursor) relative to the target. What we call the zero value of this variable -- where the measured distance between cursor and target is 0 -- is not necessarily zero neural firing, because neurons can't fire negatively. So non-zero deviations of the cursor in different directions from the target are probably represented by different _positive_ neural firing rates, lower rates for deviations to the left and higher rates for deviations to the right, say. The reference deviation of zero would then be represented by some intermediate rate of neural firing. Using this scheme for representing deviation from target leaves 0 neural firing to represent no perception of the deviation of cursor from target; the case where cursor and target are covered up or removed from the screen.
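Rick's offset-coding scheme can be sketched as a pair of hypothetical mapping functions. The baseline, gain, and ceiling values below are illustrative choices for the sketch, not physiological measurements.

```python
def deviation_to_rate(deviation: float, baseline: float = 50.0,
                      gain: float = 2.0, max_rate: float = 100.0) -> float:
    """Map a signed cursor-target deviation onto a non-negative
    'firing rate' (impulses/sec).  Zero deviation maps to an
    intermediate baseline rate; negative (leftward) deviations lower
    the rate and positive (rightward) deviations raise it."""
    rate = baseline + gain * deviation
    return min(max(rate, 0.0), max_rate)  # a neuron cannot fire negatively

def rate_to_deviation(rate: float, baseline: float = 50.0,
                      gain: float = 2.0) -> float:
    """Inverse mapping, valid within the unclipped range."""
    return (rate - baseline) / gain

# Zero deviation -> the baseline rate, not zero firing; zero firing is
# left free to mean "no perception of the deviation at all".
print(deviation_to_rate(0.0))    # 50.0
print(deviation_to_rate(-10.0))  # 30.0
print(deviation_to_rate(10.0))   # 70.0
```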

Best

Rick

Where on the screen is zero position? If I want to control
for the cursor being at position zero, and it is in the middle of the
screen, what do I do? If I want to control for the cursor being at
the middle of the screen and I can't see it, what do I do?

Those two conditions are different, aren't they? In the first case, I
may have defined "zero position" as the middle of the screen and
there is no error, or I may have defined "zero position" as the left
edge, and I act to move the cursor leftwards, reducing the error until
it gets to the edge. On the other hand, if I can't see the cursor, I
can't act reliably to move to a position of zero error. I have no
perception of it, not a perception that is equal to zero.

Which brings me to my proposal for an amendment to your demo.

Instead of having the mouse control the screen display directly, have
it control through a hidden (2-D) variable I'll call {P,Q}, and let
the screen {x,y} be {P+a,Q+b}, where a and b are variables. Give the
experimenter/subject two sliders for each of a and b, one of which
determines the standard deviation of the magnitude of a and b, the
other of which controls the bandwidth of the variation (which must be
slower than variation in P and Q). The subject's task now is to
control {P,Q}, not the screen position. Let a and b be zero for some
time at the start of a run before adding in the variation.

The idea of this is to simulate Erling's covering the screen with an
envelope, but also to have intermediate conditions, rather like a
translucent envelope that gives you a fuzzy idea of where the target
you are tracking (and the cursor) are at any moment. The subject
won't lose control immediately, and the demo should be able to track
how long it takes before that happens.
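A one-dimensional sketch of Martin's proposed amendment, under assumed dynamics (integrating controller, sinusoidal disturbance, slow random-walk display offset). The function name and every parameter are my own illustration, not part of any existing demo.

```python
import math
import random

def track_hidden(noise_sd: float, n: int = 5000, gain: float = 5.0,
                 dt: float = 0.01, seed: int = 1) -> float:
    """1-D sketch of Martin's proposal: the controller sees only
    x = P + a, where P is the hidden controlled variable and a is a
    slow display offset (random walk scaled by noise_sd).  Returns
    the RMS error of P from its reference of zero."""
    rng = random.Random(seed)
    P, m, a = 0.0, 0.0, 0.0
    err2 = 0.0
    for t in range(n):
        d = 5.0 * math.sin(2 * math.pi * 0.1 * t * dt)  # disturbance on P
        a += rng.gauss(0.0, noise_sd * math.sqrt(dt))   # slow drift of offset
        P = m + d          # hidden variable
        x = P + a          # what the subject actually sees on the screen
        m += -gain * x * dt  # subject acts to null the *seen* position
        err2 += P * P
    return math.sqrt(err2 / n)

# With noise_sd = 0 the seen position equals P and control is good;
# as noise_sd grows, the display decouples from P and control of the
# hidden variable degrades, as Martin predicts.
```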

An alternative form, which is probably rather harder to program, is
to fuzz the display of the cursor and/or the potential targets.

Yet another alternative, which might be even better for the purpose
of demonstration, is to define a precise area on the screen that acts
like a small envelope. In this area, the screen contrast of the
cursor and targets is diminished (the area could be grey, ranging
from white to black according to a slider setting). At zero contrast,
it's just like the envelope, and at low but non-zero contrast, it's
like a translucent envelope. As the dot being tracked (and the
cursor) passes in and then out of the obscured area, one could
determine the time course of both the loss and the regaining of
control.

I think some such variant of your demo might both illustrate the
difference between a zero valued perception and a non-existent
perception, and at the same time allow for some interesting
observations about losing and regaining control when the data are
obscured and the tracker must rely on control through imagination
during an interim period (see the thread of a few weeks ago).

Martin

Richard S. Marken, PhD
Psychology
Loyola Marymount University
Office: 310 338-1768
Cell: 310 729 - 1400

[From: Paule A. Steichen Asch (2006.01.18.11:55)]

Very humbly...
I am doing internet (re)search on the brain in another context.
Sorry if I misspeak and do not acknowledge all the writings, but...

How the brain processes information top-down and bottom-up is a complex
paradigm involving more parameters than my brain can encompass.

I am fascinated by the issue of (projection) of intention.

In monkeys the grasping movement in a monkey does evoke a response in a
monkey-witness brain, for ex (earlier discussions here).

We can only have a guess on another's intention?

paule

···

----- Original Message -----
From: "Richard Marken" <rmarken@EARTHLINK.NET>
To: <CSGNET@LISTSERV.UIUC.EDU>
Sent: Wednesday, January 18, 2006 11:40 AM
Subject: Re: NYTimes.com: Cells That Read Minds

[From Rick Marken (2006.01.18.0830)]

>Martin Taylor (2006.01.18.09.51) --

I'm pretty busy today so I probably won't have time to give a complete
reply until later this evening. I'll just quickly reply to one of your
points.

>> Rick Marken (2006.01.17.0800)--

>>I don't see that. I was thinking of the perceptual signal, which is
>>presumably represented as rate of neural firing. So when the rate of
>>neural firing is virtually zero impulses per second (there is always
>>_some_ firing) there is zero perception.
>
>Don't think of intensity perception, think of position, and think of
>your demo.

I was. Remember, in PCT the value of a perceptual _variable_, be it
intensity or principle, is represented by the rate of afferent neural
firing. So position of a line is represented in this way. Actually, in the
tracking task what is probably represented by neural firing rate is the
position of the line (cursor) relative to the target. What we call the zero
value of this variable -- where the measured distance between cursor and
target is 0 -- is not necessarily zero neural firing because neurons can't
fire negatively. So non-zero deviations of cursor in different directions
from target are probably represented by different _positive_ neural firing
rates, lower rates for deviations to the left and higher rates for
deviations to the right, say. The reference deviation of zero would then be
represented by some intermediate rate of neural firing. Using this scheme
for representing deviation from target leaves 0 neural firing to represent
no perception of the deviation of cursor from target; the case where cursor
and target are covered up or removed from the screen.
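This offset-coding scheme can be sketched numerically. The baseline and gain values below are invented for illustration, not taken from the post:

```python
BASELINE = 50.0   # impulses/sec representing zero deviation (assumed value)
GAIN = 10.0       # impulses/sec per unit of deviation (assumed value)

def firing_rate(deviation, visible=True):
    """Map signed cursor-target deviation to a non-negative firing rate.

    Zero deviation maps to an intermediate rate; leftward (negative)
    deviations to lower rates, rightward (positive) to higher rates.
    A rate of zero is reserved for no perception of the deviation at
    all, as when cursor and target are covered up or removed."""
    if not visible:
        return 0.0
    return max(0.0, BASELINE + GAIN * deviation)
```

So, for instance, a deviation of -2 units yields a lower rate than a deviation of +2, both are strictly positive, and only the covered-up case produces zero firing.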

Best

Rick

>Where on the screen is zero position? If I want to control
>for the cursor being at position zero, and it is in the middle of the
>screen, what do I do? If I want to control for the cursor being at
>the middle of the screen and I can't see it, what do I do?
>
>Those two conditions are different, aren't they? In the first case, I
>may have defined "zero position" as the middle of the screen and
>there is no error, or I may have defined "zero position" as the left
>edge, and I act to move the cursor leftwards, reducing the error until
>it gets to the edge. On the other hand, if I can't see the cursor, I
>can't act reliably to move to a position of zero error. I have no
>perception of it, not a perception that is equal to zero.
>
>Which brings me to my proposal for an amnedment to your demo.
>
>Instead of having the mouse control the screen display directly, have
>it control through a hidden (2-D) variable I'll call {P,Q}, and let
>the screen {x,y} be {P+a,Q+b}, where a and b are variables. Give the
>experimenter/subject two sliders for each of a and b, one of which
>determines the standard deviation of the magnitude of a and b, the
>other of which controls the bandwidth of the variation (which must be
>slower than variation in P and Q). The subject's task now is to
>control {P,Q}, not the screen position. Let a and b be zero for some
>time at the start of a run before adding in the variation.
>
>The idea of this is to simulate Erling's covering the screen with an
>envelope, but also having intermediate conditions rather like having
>a translucent envelope that gives you a fuzzy idea of where the one
>you are tracking (and the cursor) are at any moment. The subject
>won't lose control immediately, and the demo should be able to track
>how long it takes before it happens.
>
>An alternative form, which is probably rather harder to program, is
>to fuzz the display of the cursor and/or the potential targets.
>
>Yet another alternative, which might be even better for the purpose
>of demonstration, is to define a precise area on the screen that acts
>like a small envelope. In this area, the screen contrast of the
>cursor and targets is diminished (the area could be grey, ranging
>from white to black according to a slider setting). At zero contrast,
>it's just like the envelope, and at low but non-zero contrast, it's
>like a translucent envelope. As the dot being tracked (and the
>cursor) passes in and then out of the obscured area, one could
>determine the time course of both the loss and the regaining of
>control.
>
>I think some such variant of your demo might both illustrate the
>difference between a zero valued perception and a non-existent
>perception, and at the same time allow for some interesting
>observations about losing and regaining control when the data are
>obscured and the tracker must rely on control through imagination
>during an interim period (see the thread of a few weeks ago).
>
>Martin

Richard S. Marken, PhD
Psychology
Loyola Marymount University
Office: 310 338-1768
Cell: 310 729 - 1400

[From Rick Marken (2006.01.18.1930)]

Rick Marken (2006.01.18.0830)

Martin Taylor (2006.01.18.09.51) --

I'm pretty busy today so I probably won't have time to give a complete reply until later this evening. I'll just quickly reply to one of your points.

Rick Marken (2006.01.17.0800)--

I don't see that. I was thinking of the perceptual signal, which is presumably
represented as rate of neural firing. So when the rate of neural firing is
virtually zero impulses per second (there is always _some_ firing) there is
zero perception.

Don't think of intensity perception, think of position, and think of
your demo.

I was. Remember, in PCT the value of a perceptual _variable_, be it intensity or principle, is represented by the rate of afferent neural firing.

Well, now that I have time I see that I don't really have much to add to what I said when I was rushed. So it's your turn;-)

Best

Rick

···

---
Richard S. Marken Consulting
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400