Testing for Imagination in Experiments (was Changing the foundations of PCT)

[Martin Taylor 2010.08.28.23.43]

[From Rick Marken (2010.08.25.2045)]
So we have to develop an
experiment -- a variant of the yes/no detection task -- where these
models make different predictions. I have an idea of the kinds of
experiment that will distinguish between the models but I was hoping
to get some agreement about the models (in the form of programs)
before I proposed it. I'll wait.

I don't suppose you would care to comment on the third experiment I suggested to distinguish between the models, would you? It seems simple enough, and although no test can prove any model to be correct, at least if it turns out that people can push button "3" at the same time as saying "three", I think it would eliminate your model.

I did ask for comment when I first proposed it [Martin Taylor 2010.08.11.14.33], and again a few days later [Martin Taylor 2010.08.24.11.04]. So far, nothing.

And yes, I have MS Office, so I imagine I could run your spreadsheet.

Martin

[From Bill Powers (2010.08.28.2200 MDT)]

Martin Taylor 2010.08.28.23.37 --

[From Bill Powers (2010.08.14.1030 MDT)]

[Martin Taylor 2010.08.24.11.04]

(re: Rick Marken (2010.08.23.1900))

MMT: Do you have an open-loop model to serve as a check against your model? Personally, I find it hard to think of how an open-loop model might look.

Here's one:

             other inputs
                  |
                  v
Stimulus -->[some function] --> ref signal --> Control system --> response

I think this represents your diagram. "Some function" may contain internal feedback loops but there is no feedback from the observed response to "some function" or to the observed "stimulus". Overall, this is an S-R system.

I don't see much relationship between this diagram and my model. Could you explain where in it the loop representing the dialogue between subject and experimenter is shown?

What dialog is that? After giving the instructions, does the experimenter tell or otherwise indicate to the subject whether each response was correct? Does the subject have any way of knowing, during the experiment, whether the responses fulfill the requirements of the task? Does the experimenter do anything different that the subject can see that depends on whether the response is right or wrong?

The "some function" part indicates where the relationship is being perceived and controlled in imagination, eventually leading to setting the reference signal for the control systems that actually produces the overt response. "Other inputs" are the reference signal for the relationship controller, and perhaps over a longer time-scale and at higher levels, some verbal interactions in which the experimenter requests cooperation and gives instructions. The experimenter perceives the actual response, but as far as I know this doesn't lead the experimenter to do anything different to the stimulus whether the response is right or wrong. There's no present-time feedback thrugh the environment other than the lower-order loops inside what is called "control system" in my diagram. Perhaps there should be a feedback arrow from "response" to "control system." But there is none going to "some function."

Am I right, by the way, in assuming that in your model the search for a suitable answer occurs on every trial, meaning there is no learning?

Best,

Bill P.


[From Rick Marken (2010.08.28.2230)]

Martin Taylor (2010.08.28.23.43)--

[From Rick Marken (2010.08.25.2045)]
So we have to develop an
experiment -- a variant of the yes/no detection task -- where these
models make different predictions. I have an idea of the kinds of
experiment that will distinguish between the models but I was hoping
to get some agreement about the models (in the form of programs)
before I proposed it. I'll wait.

I don't suppose you would care to comment on the third experiment I
suggested to distinguish between the models, would you? It seems simple
enough, and although no test can prove any model to be correct, at least if
it turns out that people can push button "3" at the same time as saying
"three", I think it would eliminate your model.

The reason I don't comment on your experiment is because I see no
relationship between it and the models described in the diagrams we
agreed on. I'm attaching what I think was the latest version of those
diagrams. I don't see anything in the diagrams of our models that says
anything about what a person would do in a detection experiment where
they are required to respond by both pushing a button and saying a
word simultaneously. But if you think the models predict different
behavior in your proposed experiment, please demonstrate this using
working versions of the models.

As I said before, however, I think the correct way to proceed with
this effort is to build working versions of the models that account
for the behavior in a simple experiment. I'm doing that for the yes/no
detection experiment. We proposed our models as an explanation of the
behavior in such an experiment; I have implemented both and, indeed,
they produce behavior like that seen in a yes/no detection task. You
will see this once I distribute the spreadsheet. Now the task is to
design a variation of the yes/no detection task that will distinguish
between the models. We should be able to do that using the spreadsheet
and seeing which manipulations (disturbances to observable variables)
result in distinctly different behavior in the two models.
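
Rick's spreadsheet is not reproduced here, but the sketch below gives a rough, hypothetical illustration of the kind of comparison described above: one yes/no trial is run through two deliberately crude model skeletons (my own invention, not faithful versions of either model under discussion), and a disturbance is applied to an observable variable, the overt response, to show how such a manipulation could in principle separate the two organizations.

```python
# Illustrative only: two skeleton models of a yes/no trial (hypothetical code),
# plus a disturbance applied to the overt response so the two organizations
# behave differently.

def trial_relationship_control(signal_present, disturbance, gain=50.0, dt=0.01, steps=200):
    """Skeleton in which the stimulus-response relationship is controlled in
    real time: a push on the response is opposed while it acts."""
    p_s = 1.0 if signal_present else 0.0       # perceived stimulus (noise-free here)
    r = 0.0
    for _ in range(steps):
        p_r = r + disturbance                  # perceived (disturbed) response
        error = p_s - p_r                      # reference: keep p[R] matching p[S]
        r += gain * error * dt
    return (r + disturbance) > 0.5             # overt "yes" if the response ends up high

def trial_reference_setting(signal_present, disturbance):
    """Skeleton in which the answer is settled first and then simply emitted,
    so the same push shifts the overt response unopposed."""
    chosen = 1.0 if signal_present else 0.0
    return (chosen + disturbance) > 0.5

for d in (0.0, -0.7):
    a = trial_relationship_control(signal_present=True, disturbance=d)
    b = trial_reference_setting(signal_present=True, disturbance=d)
    print(f"disturbance {d:+.1f}: relationship-control says yes={a}, reference-setting says yes={b}")
```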

Best

Rick


--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com


[Martin Taylor 2010.08.29.10.42]

[From Rick Marken (2010.08.28.2230)]
Martin Taylor (2010.08.28.23.43)--
[From Rick Marken (2010.08.25.2045)]
So we have to develop an
experiment -- a variant of the yes/no detection task -- where these
models make different predictions. I have an idea of the kinds of
experiment that will distinguish between the models but I was hoping
to get some agreement about the models (in the form of programs)
before I proposed it. I'll wait.
I don't suppose you would care to comment on the third experiment I
suggested to distinguish between the models, would you? It seems simple
enough, and although no test can prove any model to be correct, at least if
it turns out that people can push button "3" at the same time as saying
"three", I think it would eliminate your model.
The reason I don't comment on your experiment is because I see no
relationship between it and the models described in the diagrams we
agreed on. I'm attaching what I think was the latest version of those
diagrams.
It's an old version. The output side was modified after Bill reminded us about his powerful notion that the reference input is not the value of the output from a higher level, but is the output of a content-addressable memory addressed by the higher-level output. Here's the version I posted using that notion [Martin Taylor 2010.08.11.14.33]:

![psychophysModel_MMT.jpg|816x342](upload://v8MfsNJnfw5TIDaU3wb6lFmn9iO.jpeg)

Here's what I said to accompany this version:

[Martin Taylor 2010.08.11.14.33] Thinking about a possible discriminative experiment between the models, one could ask the subject to use two response modes at once. For example, the subject might be asked to say the answer at the same time as pushing the button. The subject would have to practice saying and pressing simultaneously, I assume, but once practiced, it isn't too difficult to say "Three" while pushing button 3 (unless the subject is one who can't walk and chew gum at the same time); it is much less difficult than singing and accompanying oneself on the piano, which a lot of performers do very well. I'm not clear how Rick's model would accommodate that kind of multiple simultaneous response, but with the associative memory reference input, mine would have no problem with it.

Martin

[From Bill Powers (2010.08.29.0915 MDT)]

Martin Taylor 2010.08.29.10.42 –

Here’s the version I posted
using that notion [Martin Taylor 2010.08.11.14.33]:

Is it the “From experimenter” and “To experimenter”
connections that lead you to say this is not an open-loop model? If so,
the implication would be that the behaving system is varying its
response(s) Rn to bring p[p[S] - A] to its reference level r[p[p[S] - A]].
Does this mean that the experimenter varies S as a function of what R the
subject is producing?

If S changes according to a preset schedule, then none of the Rs has any
effect on S, and that loop can’t be considered as closed. The only
remaining closed loops are those from each R back to its corresponding
p[R], and the path from S to any of the Rs is open-loop.

Best,

Bill P.

[Martin Taylor 2010.08.29.11.13]

I begin to remember why I said "I give up". The shifting sands are moving again.

  [From Bill Powers (2010.08.28.2200 MDT)]

  Martin Taylor 2010.08.28.23.37 --

      [From Bill Powers (2010.08.14.1030 MDT)]

      [Martin Taylor 2010.08.24.11.04]

          (re: Rick Marken (2010.08.23.1900))

        MMT: Do you have an open-loop model to serve as a check against your model? Personally, I find it hard to think of how an open-loop model might look.

      Here's one:

                   other inputs
                        |
                        v
      Stimulus -->[some function] --> ref signal --> Control system --> response

      I think this represents your diagram. "Some function" may contain internal feedback loops but there is no feedback from the observed response to "some function" or to the observed "stimulus". Overall, this is an S-R system.

    I don't see much relationship between this diagram and my model. Could you explain where in it the loop representing the dialogue between subject and experimenter is shown?

  What dialog is that?

Discussed in a long series of messages in mid-February 2009, with a diagram in [Martin Taylor 2009.02.14.14.32] that has since been reproduced at least once and I think more than that.

  After giving the instructions, does the experimenter tell or otherwise indicate to the subject whether each response was correct? Does the subject have any way of knowing, during the experiment, whether the responses fulfill the requirements of the task? Does the experimenter do anything different that the subject can see that depends on whether the response is right or wrong?

Discussed in a message you commended: [Martin Taylor 2010.07.05.00.05]

  The "some function" part indicates where the relationship is being

perceived and controlled in imagination, eventually leading to
setting the reference signal for the control systems that actually
produces the overt response.

Interesting that you should relegate the control loop that does the work to the status of "some function" with "other inputs". I wonder why you would describe the main control system that way?

  Am I right, by the way, in assuming that in your model the search for a suitable answer occurs on every trial, meaning there is no learning?

How could it be otherwise? The correct answer IS different on every trial, or rather, there is nothing about an earlier trial that correlates with what the correct answer is on this trial. So there is no possibility for learning what the right response should be. If you are talking about how to give the right response, meaning how to say "three" when one wants to get across the idea of threeness, most three-year-olds can do it, so there's no learning possibility there, either.

But there is learning, though not about how to give the appropriate answer. That learning was done partly when the subject was two or three years old and partly when the experimenter gave the instructions. The learning that happens during the experiment is about detecting the tone in the noise. I once did a whole summer's worth of detection experiments with two subjects, one of whom did about a million and a quarter individual detections, and she was still improving her performance, coming ever closer to ideal, even at the end of it. I think she got within about 2 dB of ideal-observer performance, whereas a naive listener usually is about 6 dB worse than an ideal observer. So yes, there is learning in the experiment, but it isn't about being able to push button "3" or say "three" when you want to report that the interval in question was the third. As I said, one learns that before kindergarten, and any decent experimenter tests that the subject knows that's what they are supposed to do before the experiment proper commences [Martin Taylor 2010.07.05.00.05].

I'm on the point of giving up again. Very little of what I say seems to have any effect on what you and Rick say about the experiment, nor does much of what you say seem to have much relevance to discriminating between the models of how the subject generates the response. We agree (or at least I hope that's still true) that regardless of which response model is correct (if either), the studies do correctly show the interesting property of the perceptual input pathway, which is for me the ONLY question of interest and was the reason I initially commented on Rick's paper.

With that out of the way, I'm much more interested in things like the ramifications (which I think are extensive) of the notion that a reference value is not the output value from a higher-level control system but is instead the output of a content-addressable memory addressed by the output of a higher-level system.

So, unless future messages build on prior agreements or give reasons to modify them, I probably won't respond much more in this thread.

Martin

[Martin Taylor 2010.08.29.11.45]

[From Bill Powers (2010.08.29.0915 MDT)]

  Martin Taylor 2010.08.29.10.42 --
    Here's the version I posted using that notion [Martin Taylor 2010.08.11.14.33]:

    ![psychophysModel_MMT.jpg|816x342](upload://v8MfsNJnfw5TIDaU3wb6lFmn9iO.jpeg)
  Is it the "From experimenter" and "To experimenter"

connections that lead you to say this is not an open-loop model?
If so,
the implication would be that the behaving system is varying its
response(s) Rn to bring p[p[S] - A] to its reference level
r[p[p[S] - A].
Does this mean that the experimenter varies S as a function of
what R the
subject is producing?

  If S changes according to a preset schedule, then none of the Rs

has any
effect on S, and that loop can’t be considered as closed. The only
remaining closed loops are those from each R back to its
corresponding
p[R], and the path from S to any of the Rs is open-loop.

That's disingenuous. I am rather inclined to say "inflammatory", but I won't go that far right now.

When someone says some behaviour is "open loop" it usually suggests that the behaviour is a stimulus-response system rather than a control system. Present the stimulus and the behaviour emerges. To tell someone who always works with the basic assumption of PCT, that all intentional behaviour is the output of a control system that is controlling some perception, that his model is "open loop" seems like a deliberate attempt to create a very large disturbance.

Rick has been doing it for a year and a half, and I really do find it annoying. I have tried many ways (reorganizing) to control my perception that you and Rick perceive me to espouse an S-R model of behaviour (my reference value is that you should not) but my reorganizing hitherto has been unable to come any closer to controlling that perception. Failure of reorganization in the face of persistent error leads to frustration, and sometimes anger. It's not pleasant.

It is obviously true (as you pointed out to Rick when the topic of psychophysical experiments first came up in February 2009) that the behaviour of making the response has absolutely no effect on the presentation. Perception of the stimulus is not the perception that is controlled by the output. Since the response is intended ONLY for the experimenter, and would not be emitted if the subject were simply listening to the successive stimuli for his own enjoyment, the controlled perception must be a perception of the experimenter by the subject, which for want of a better guess, I have labelled "perceive the experimenter to be satisfied that I am doing the right thing". One possible real-time feedback pathway for this perception is the continuance of the sequence of stimuli, which might very well stop if the experimenter were not satisfied the subject was doing the right thing.

When you model a tracking experiment, you do not usually model in any way the hierarchy of control systems that culminate in the movement of the joystick. You simply model the link between the output function of the cursor-target distance perception control and the perceptual input function as a simple connector with an added disturbance at one point. Nor do you postulate that the controlled perception of the position of the joystick is fed back into the perception of the distance between cursor and target. Nevertheless, when you consider a psychophysical study (which could just as easily be to determine the ability of the subject to detect whether the cursor is above or below the target), you make a big deal about the absolute necessity that the button push perception (or voice perception or finger-waggle perception) be an element of the perception controlled when the subject decides whether the target is above or below the cursor (or that the signal was embedded in the tone burst). To me this seems perversely inconsistent.
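
For readers who have not run one of these models, the standard tracking arrangement referred to above looks roughly like the sketch below. It is a generic compensatory-tracking skeleton with arbitrary parameters (my own code, not any specific CSGnet program): the output reaches the cursor through a simple connector, a disturbance is added at that one point, and only the cursor-target distance is perceived and controlled.

```python
import random

def run_tracking(steps=2000, dt=0.01, gain=80.0, seed=1):
    """Generic tracking-model skeleton (illustrative parameters only)."""
    random.seed(seed)
    output, target, disturbance = 0.0, 0.0, 0.0
    abs_errors = []
    for _ in range(steps):
        disturbance += random.gauss(0.0, 0.02)   # slowly wandering disturbance
        cursor = output + disturbance            # simple connector + disturbance
        perceived = cursor - target              # the one controlled perception
        error = 0.0 - perceived                  # reference: zero separation
        output += gain * error * dt              # output function acting on the error
        abs_errors.append(abs(perceived))
    return sum(abs_errors) / len(abs_errors)

print("mean |cursor - target| while controlling:", round(run_tracking(), 4))
```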

Anyway, as I said in another message a few minutes ago, I find the question of how the response is produced when the subject has decided on an answer to be mildly interesting, but not interesting enough to make it worth my while to control for sustaining prior understandings in the absence of effective arguments as to why those agreements should be cancelled.

PCT has a lot more to offer than this.

Martin

[From Rick Marken (2010.08.29.0920)]

Martin Taylor (2010.08.29.10.42)–

Rick Marken (2010.08.28.2230)--
RM: The reason I don't comment on your experiment is because I see no
relationship between it and the models described in the diagrams we
agreed on. I'm attaching what I think was the latest version of those
diagrams.
MT: It's an old version. The output side was modified after Bill reminded us about his powerful notion that the reference input is not the value of the output from a higher level, but is the output of a content-addressable memory addressed by the higher-level output. Here's the version I posted using that notion

Here’s what I said to accompany this version:

[Martin Taylor 2010.08.11.14.33] Thinking about a possible discriminative experiment between the models, one could ask the subject to use two response modes at once. For example, the subject might be asked to say the answer at the same time as pushing the button. The subject would have to practice saying and pressing simultaneously, I assume, but once practiced, it isn't too difficult to say "Three" while pushing button 3 (unless the subject is one who can't walk and chew gum at the same time); it is much less difficult than singing and accompanying oneself on the piano, which a lot of performers do very well. I'm not clear how Rick's model would accommodate that kind of multiple simultaneous response, but with the associative memory reference input, mine would have no problem with it.

Again, this is a model of an experiment that hasn't even been done. If there is data from such an experiment, then present it and show that this model accounts for it. My model would probably accommodate multiple simultaneous responses very much like yours; the only difference would be that the higher-level system, the one that controls (in imagination) p[p[A] - A] in your model, would control something like p[p[A] - (p[R1]+p[R2]+...+p[Rn])] in mine.

But, again, I’m going to stick to the approach I described: getting your and my models to match the behavior in a yes/no detection task and then see if there is a manipulation that will discriminate between the models in terms of observed behavior. In the yes/no detection task there is only one response (R) so the version of your model that I posted – which is a subset of your multiple response model – is the one I use.

Best

Rick



Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Rick Marken (2010.08.29.0930)]

Martin Taylor (2010.08.29.11.45)–

Bill Powers (2010.08.29.0915 MDT)–

  BP: If S changes according to a preset schedule, then none of the Rs has any effect on S, and that loop can't be considered as closed. The only remaining closed loops are those from each R back to its corresponding p[R], and the path from S to any of the Rs is open-loop.

That's disingenuous. I am rather inclined to say "inflammatory", but I won't go that far right now.

When someone says some behaviour is "open loop" it usually suggests that the behaviour is a stimulus-response system rather than a control system. Present the stimulus and the behaviour emerges. To tell someone who always works with the basic assumption of PCT, that all intentional behaviour is the output of a control system that is controlling some perception, that his model is "open loop" seems like a deliberate attempt to create a very large disturbance.

Rick has been doing it for a year and a half, and I really do find it annoying.

I'm sorry. I've really been trying to avoid it. That's why I think doing the modeling is so important. The term "open-loop" in reference to your model seems to upset you a lot so I really have tried to avoid it. I think we may be able to get a better idea of what's going on in your model if we just build it and compare it to mine in terms of experimental data. Then we can just see what's going on and not have to just rely on words.

Best

Rick



Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2010.08.29.1455 MDT)]

Martin Taylor 2010.08.29.11.45 –

BP earlier: If S changes
according to a preset schedule, then none of the Rs has any effect on S,
and that loop can’t be considered as closed. The only remaining closed
loops are those from each R back to its corresponding p[R], and the path
from S to any of the Rs is open-loop.

MMT: That’s disingenuous. I am
rather inclined to say “inflammatory”, but I won’t go that far
right now.

BP: You’d better not, because I don’t see anything disingenuous or
inflammatory in what I said. Tell me what is closed-loop about your
diagram.

MMT: When someone says some
behaviour is “open loop” it usually suggests that the behaviour
is a stimulus-response system rather than a control system. Present
the stimulus and the behaviour emerges. To tell someone who always works
with the basic assumption of PCT, that all intentional behaviour is the
output of a control system that is controlling some perception, that his
model is “open loop” seems like a deliberate attempt to create
a very large disturbance.

BP: Well, it should be a disturbance if you understood what I said,
because in your diagram there is no feedback from R to any preceding
stage of the process except the mechanical stages of producing the
response once its reference signal has been set. There is no feedback
from the response, during the response, to the relationship-controlling
system’s input function. In the imagination phase there is no way for a
disturbance of the relationship to be corrected or even detected, so if
such a disturbance starts before the apparently correct response
has been selected, the overt response will be the same as it would have
been without a disturbance, and therefore inadequate to correct the
effect of the disturbance. You are assuming that once the reference
signal for the response has been set, it is inevitable that the perceived
response will match it and will be the right response. I don’t think the
real system or the real world is organized that way, unless the
experimenter creates very special circumstances.

As soon as it goes into the imagination mode, your model is an open-loop
model, an S-R model with a control system as part of its output function:
I challenge you to show me how the loop is closed while the imagined
response is being selected. Just getting indignant is insufficient. Show
me how the loop is closed and I will cheerfully admit that you were right
and I was wrong. We've both done that before -- what's so special about
this case?

Rather than cite the simple tracking experiment, you should cite the case
in which both the target position and the cursor position are being
perturbed continuously by two uncorrelated disturbances (see Models and
Their Worlds). Now it is impossible to imagine a cursor position that
will correct the error, because while you’re imagining, the cursor is
still being disturbed in changing ways, so the mouse movement you imagine
will be different from the one that is needed.

Rick, this provides a very simple test: make the task that of moving a
pointer on the screen to point to “YES” or “NO”. You
can actually ask the subject to describe the right mouse movement before
moving the mouse, saying “mouse up” or “mouse down.”
Then on a signal the subject is actually to correct the error. Prior to
the signal, the cursor is displayed as if there is no disturbance, or as
if the disturbance is the same as when the trial started. Then when the
signal is given, the real cursor position is shown. I predict that the
subject will make the right direction of response every time, although
the verbal description will be wrong half of the time. The subject will
probably think the skill being tested is prediction of the correct mouse
movement.

I did something like this in “A Cognitive Control System.” The
subject was shown a series of arithmetic problems such as “3 + 7
=” and was asked to move a pointer up and down a column of numbers
from 0 to 100 to indicate the right answer. The mouse movement needed to
do that was different on every trial even if the answer was the same,
because a disturbance was added to the pointer position. The right answer
was given every time but the correlation of mouse position with the
magnitude of the answer was close to zero.
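
A toy simulation (my reconstruction of the logic, not the original program from "A Cognitive Control System"; the trial count and disturbance range are invented) shows why the correlation comes out near zero in the arrangement described: the pointer is brought to the right answer on every trial, but pointer = mouse + disturbance, so the mouse must absorb whatever the disturbance does and ends up nearly uncorrelated with the answer.

```python
import random

# Hypothetical reconstruction of the logic, not the original experiment's code.
random.seed(0)
answers, mouse_positions = [], []
for _ in range(1000):                                       # simulated trials
    answer = random.randint(1, 9) + random.randint(1, 9)    # e.g. "3 + 7 =" -> 10
    disturbance = random.uniform(-100.0, 100.0)             # added to the pointer position
    mouse = answer - disturbance                             # control succeeds on every trial
    assert abs(mouse + disturbance - answer) < 1e-9          # pointer lands on the answer
    answers.append(answer)
    mouse_positions.append(mouse)

def correlation(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print("corr(mouse position, answer) =", round(correlation(answers, mouse_positions), 3))
# Prints a value near zero: the mouse mostly mirrors the disturbance, not the answer.
```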

Best,

Bill P.

[From Rick Marken (2010.08.29.1720)]

Bill Powers (2010.08.29.1455 MDT)–

BP: Rick, this provides a very simple test [of Martin’s model]: make the task that of moving a
pointer on the screen to point to “YES” or “NO”. You
can actually ask the subject to describe the right mouse movement before
moving the mouse, saying “mouse up” or “mouse down.”
Then on a signal the subject is actually to correct the error. Prior to
the signal, the cursor is displayed as if there is no disturbance, or as
if the disturbance is the same as when the trial started. Then when the
signal is given, the real cursor position is shown. I predict that the
subject will make the right direction of response every time, although
the verbal description will be wrong half of the time. The subject will
probably think the skill being tested is prediction of the correct mouse
movement.

I agree that this tests Martin's model. But we have to see if Martin agrees. And I still think it's better to test his model in the context of an experiment that is very similar to an existing experiment. That's why I wanted to do it in the context of a yes/no detection task. I was originally thinking of a reaction time task; I still prefer that but detection will do. But I think it really should be a task that is very similar to a conventional experimental task (as you once said, too, I believe). After all, the goal is to see whether the behavior in conventional experiments is closed loop (as per PCT) or closed loop in imagination (as per Martin). I guess another goal is to show that Martin's imagination control model is (dare I say it) open loop with respect to the variable disturbed by S in an experiment. But I think it's important to keep in mind that ultimately it's about the nature of behavior in experiments.

Best

Rick



Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2010.08.30.0704 MDT)]

Rick Marken (2010.08.29.1720) --

But I think it really should be a task that is very similar to a conventional experimental task (as you once said, too, I believe).

I agree with that if it's possible, but in Martin's experiment there's no way to know if the selection of the response is taking place in imagination, there being no outward sign that this is happening. I was trying to invite the subject explicitly to imagine the answer first, then convey it through several kinds of response. There's still no way to observe the imagination phase while it's happening, but if the subject starts by moving the pointer the wrong way on some trials, then quickly corrects the error and sends the pointer to the right answer, the evidence is somewhat reliable. Saying which way the mouse will move before moving it adds a little more evidence. Interviewing the subject would provide still more.

Many years ago I put together a model of oculomotor control systems. One hypothesis was that saccades involve selecting a target well off of the line of sight, and then blanking out the visual field so the pursuit tracking of the eyes would not remain locked on the background. The lower-gain system for moving the eyes to a target would then be free to move the eye, and then the pursuit system would turn back on, locking the eye to the background. This model is quite faithful to what actually happens.

The way the model was set up, when the subject selected an off-axis target, the position control system would start trying to move the eye. The pursuit system would keep the eye from moving. The disturbance from the position system would be non-zero, though its effects would be divided by the loop gain of the pursuit system (actually the ratio of the loop gains of the two systems). So a contact lens with a mirror mounted on it would enable us to see the small deflection of the eye toward the target point, predicting the direction of the saccade and perhaps even its size. It would be interesting just to watch what happens as the subject selects one off-axis target after another even without actually looking at it.
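
The "divided by the ratio of the loop gains" point can be seen in a one-line steady-state balance for two opposed proportional controllers acting on eye position; this is my reconstruction of the arithmetic, not the original model's equations. With the pursuit system holding the background at reference 0 with loop gain $K_p$, and the position system pulling toward an off-axis target $T$ with gain $K_t$:

$$\dot{x} \;\propto\; K_p\,(0 - x) \;+\; K_t\,(T - x) \quad\Longrightarrow\quad x_{ss} \;=\; \frac{K_t}{K_p + K_t}\,T \;\approx\; \frac{T}{K_p/K_t} \quad (K_p \gg K_t),$$

so the small pre-saccadic deflection a mirror on a contact lens would reveal is roughly the target offset divided by the ratio of the two loop gains.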

That's as close as I can come to an experiment that actually measures an effect of covert selection of an action before it's carried out.

Best,

Bill P.

[From Rick Marken (2010.08.30.1700)]

Bill Powers (2010.08.30.0704 MDT)–

Rick Marken (2010.08.29.1720) –

RM: But I think it really should be a task that is very similar to a conventional experimental task (as you once said, too, I believe).

BP: I agree with that if it’s possible, but in Martin’s experiment there’s no way to know if the selection of the response is taking place in imagination, there being no outward sign that this is happening. I was trying to invite the subject explicitly to imagine the answer first, then convey it through several kinds of response.

This is where the modeling might come in handy. My approach would be more like a test for the controlled variable. The main difference between Martin’s and my model is in the fact that my model controls the relationship between perceived stimulus and response (as well as the response itself) while Martin’s model controls only the response. Both models appear to be controlling this relationship variable when the disturbance is only to the response. My spreadsheet shows this clearly. But I think there is a way to disturb the relationship variable in another way so that the disturbance will be effective if Martin’s model is correct and mine is not. We can see how this works in a simulated version of the experiment and then put a person into the same situation and see what happens.

Unfortunately I won’t have much time to work on it this week. But I’m certainly not dropping this. I hope to have something to show by this weekend.

Best

Rick



Richard S. Marken PhD

rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2010.08.31.11.30]

Another delayed posting.

[From Rick Marken (2010.08.29.0920)]

My model would probably accommodate multiple simultaneous responses very much like yours; the only difference would be that the higher-level system, the one that controls (in imagination) p[p[A] - A] in your model, would control something like p[p[A] - (p[R1]+p[R2]+...+p[Rn])] in mine.

What on earth would it mean to add the quantity of button push to the quantity of vocal response to the quantity of finger wagging, and compare the sum to the decision as to whether there was a signal or not?

I have no idea what it would mean to sum the quantity of the various different kinds of possible response that the experimenter might request in the instructions. To me it sounds simply absurd. For one thing, it would mean that a "correct" response in a 4AFC experiment when the signal was in the second interval could equally well be "press 2 and say 'two'" or "press 1 and say 'three'". The relationship control system would see the same perceptual signal value in either case.

But, again, I'm going to stick to the approach I described: getting
your and my models to match the behavior in a yes/no detection task
and then see if there is a manipulation that will discriminate between
the models in terms of observed behavior. In the yes/no detection task
there is only one response (R) so the version of your model that I
posted -- which is a subset of your multiple response model -- is the
one I use.

I very much doubt you will find any difference between the models unless you do something equivalent to one of the three discriminative experiments I have proposed. Any one of them should distinguish the models, but every time I have proposed a different kind of discriminative experiment you don't want to try it.

It's easy to try my third suggestion without even involving a detection task. Simply set up the display to show, say "1" or "2" randomly on successive trials, and try saying the corresponding number while simultaneously pushing the corresponding numeric key. I don't see how your model would permit that to happen, but it's pretty easy for a human to do.
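
For anyone who wants to try this informally, here is a throwaway terminal version of the display just described. It is a hypothetical script of my own; it only shows the digit and records the key press, and the simultaneous spoken response is left to the person at the keyboard.

```python
import random
import time

# Throwaway demo: show "1" or "2" at random, collect the matching key press.
trials = 5
for trial in range(1, trials + 1):
    digit = random.choice("12")
    print(f"\nTrial {trial}: say the number aloud while pressing it, then Enter -> {digit}")
    t0 = time.time()
    pressed = input().strip()
    latency = time.time() - t0
    print(f"  pressed: {pressed or '(nothing)'}   correct: {pressed == digit}   latency: {latency:.2f} s")
```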

Actually, you don't even need the display to demonstrate that my model can handle simultaneous responses to an imagined number. Just imagine the concept of "oneness" or "twoness" and do the simultaneous vocalization and button push. 1 2 1 1 2. There. I've just been doing it with no problem at all. If you do that, you have been executing the output part of my model, from the point where the chosen answer has been imagined.

Martin

[From Rick Marken (2010.09.06.1030)]

Martin Taylor (2010.08.31.11.30)–

Rick Marken (2010.08.29.0920)--

RM: My model would probably accommodate multiple simultaneous responses very much like yours; the only difference would be that the higher-level system, the one that controls (in imagination) p[p[A] - A] in your model, would control something like p[p[A] - (p[R1]+p[R2]+...+p[Rn])] in mine.

What on earth would it mean to add the quantity of button push to the quantity of vocal response to the quantity of finger wagging, and compare the sum to the decision as to whether there was a signal or not?

I shouldn't have used "+" operators. I meant only that a perception of the many responses to be made would have to be part of the relationship perception that is controlled at the higher level. I probably should have said that the higher-order perception controlled in my model would be something like p[p[A] - p[R1, R2, ..., Rn]], where p[A] is the perception of the stimulus. And my model, being a closed-loop control model, would not be comparing the perception of the responses to "the decision as to whether there was a signal or not". It would be perceiving and controlling a perceived relationship between the perception of the stimulus (p[A]) and the responses (p[R1, R2, ..., Rn]).
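
Restated in symbols (my transcription of the description just given, with $p[\cdot]$ standing for a perceptual function and $r$ for the reference signal), the higher-level system in this account perceives and controls

$$q \;=\; p\big[\,p[A] \;-\; p[R_1, R_2, \ldots, R_n]\,\big], \qquad e \;=\; r - q,$$

and varies the responses $R_1 \ldots R_n$ to keep $q$ near $r$; it is the stimulus-response relationship, not the answer by itself, that this loop controls.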

But, again, I'm going to stick to the approach I described: getting your and my models to match the behavior in a yes/no detection task and then see if there is a manipulation that will discriminate between the models in terms of observed behavior. In the yes/no detection task there is only one response (R) so the version of your model that I posted -- which is a subset of your multiple response model -- is the one I use.

I very much doubt you will find any difference between the models unless you do something equivalent to one of the three discriminative experiments I have proposed.

Why don't we wait until I've got the two models implemented? I hope to have something by the end of the day.

It’s easy to try my third suggestion without even involving a detection task. Simply set up the display to show, say “1” or “2” randomly on successive trials, and try saying the corresponding number while simultaneously pushing the corresponding numeric key. I don’t see how your model would permit that to happen, but it’s pretty easy for a human to do.

What you describe here is very similar to demonstrations I use to show my students how control can look like S-R. It's great that you bring this up because it should remind everyone (who is paying any attention) what this apparently arcane debate is about. The behavior in psychology experiments -- particularly the perceptual experiments we've been discussing -- looks S-R. That's the "behavioral illusion" according to PCT. What's actually happening (according to PCT) is that subjects are controlling a perception (the controlled variable) by responding (R) appropriately when the controlled variable is disturbed by S.

You have argued that there is no behavioral illusion in perceptual experiments; that the appearance of S-R is for real, and you have developed an S-R model (which is a control model only in the sense that it controls R) to explain what is seen. I have proposed a control model that controls not only R but also the relationship between the perception of R and S. In your model the observed relationship between S and R results from a direct causal link from S to R. In my model, the relationship between S and R is a side effect of controlling the perceived relationship between S and R; the apparent causal connection between S and R is an illusion.

Once we have agreed that the models are what we have proposed and that they behave in a simple experiment as expected, then we can go on to develop tests to see which model is correct and, thus, implicitly show whether or not the S-R relationships observed in perceptual experiments reflect a direct causal path from S to R, as suggested by your model, or are a behavioral illusion -- a side effect of control of a controlled perceptual variable that is influenced by both S and R.

Actually, you don't even need the display to demonstrate that my model can handle simultaneous responses to an imagined number. Just imagine the concept of "oneness" or "twoness" and do the simultaneous vocalization and button push. 1 2 1 1 2. There. I've just been doing it with no problem at all. If you do that, you have been executing the output part of my model, from the point where the chosen answer has been imagined.

Now you are describing the behavior as "output generation"; the simultaneous responses are caused by the imagined concepts, rather than by the "1" vs "2" stimuli. This is certainly a legitimate way to look at the situation. But I won't buy your explanation until I see the behavior modeled and the model tested. If the S-R or output-generation explanations you've given survive the appropriate tests then I'll go back to being an S-R or cognitive theorist. But until then I'll work on finishing up the models of the yes/no detection task, and I'll continue to believe it very likely that the behavior you describe as S-R (or output generation) is a behavioral illusion.

Best

Rick



Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2010.09.08.1144 MDT)]

Martin Taylor 2010.08.31.11.30 –

I have no idea what it would
mean to sum the quantity of the various different kinds of possible
response that the experimenter might request in the instructions. To me
it sounds simply absurd. For one thing, it would mean that a
“correct” response in a 4AFC experiment when the signal was in
the second interval could equally well be “press 2 and say
‘two’” or “press 1 and say ‘three’”. The relationship
control system would see the same perceptual signal value in either
case.

Only if the perceptual signals from the various sources had frequencies
numerically proportional to the meanings of the associated words. Try
thinking of it this way instead: A good breakfast is given a weight of 6,
a good walk a weight of 9, a pleasant conversation a weight of 11. In
what connection? Perhaps as contributors to a feeling of well-being. Does
this mean we are adding 6 units of breakfast to 9 units of walking to 11
units of conversation? Not at all. It simply says that a perception of
well-being is derived from a number of lower-order perceptions in various
proportions. Being at a different level of perception, its units are not
the same as the units of perception at the lower levels.
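
Written out as a formula (simply restating the illustrative numbers above), the example is an ordinary weighted perceptual input function:

$$p_{\text{well-being}} \;=\; 6\,p_{\text{breakfast}} \;+\; 9\,p_{\text{walk}} \;+\; 11\,p_{\text{conversation}},$$

whose output is a quantity in its own units at its own level, not a sum of breakfasts, walks, and conversations.
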
I don't think that "absurd" is much of an explanation without a description of what is absurd about it.

I'm not feeling back to normal at all yet, so I hope my comments make sense. What we're talking about here is really the fundamental difference between models that convert inputs into outputs and those that use outputs to control inputs. The thermodynamic arguments, it seems to me, are input-output based, since they go from lower-order perceptions of quantities to higher-order perceptions of principles, and from there to "responses" that are analyzed in low-order terms again. On the way, we encounter various abstract perceptions such as entropy, which amount to classifications of lower-order perceptions. But the interpretation seems to be that the classification and its superordinate principles are driving the response: that a response occurs because entropy must increase, rather than saying that entropy increases because (e.g.) it equals dQ/Q, which quantity is observed (from a lower order of observation, without explanation) to increase.

I don’t know how well my thoughts are working, so you’ll have to take it
from there.

Best,

Bill P.