Testing for Control In Experiments

[From Rick Marken (2010.08.04.1145)]

Martin Taylor (2010.08.04.11.22)--

RM: If this is true then how about proposing a variant of the experiment
that would distinguish the models -- the existing models, not your
model and some imagined new model of mine.

MT: I did that, in two quite different variants. You rejected them for reasons
that are totally obscure to me, just saying "they won't distinguish the
models". As for an "imagined new model" of yours, I have to imagine it
because you haven't yet described it.

But that's the reason why I rejected your proposal. Your proposal (by
your own admission) is testing a model of mine that doesn't even exist
yet. I want to develop a test that will distinguish the models that do
exist. You (and Bill) have posted diagrams of these models and agreed
that the diagrams represent the two different models of the behavior
in a psychophysical experiment. I've asked Bill to re-post those
models but maybe you could do it more quickly. It is those diagrams
that I consider the two different models of behavior in a
psychophysical experiment that are to be tested. The models must make
different predictions about how people will behave if various changes
in the experimental situation are made. That's how I think these
models should be tested: design an experiment where those two models
make different predictions about what will be observed and then _do
the experiment_ to see what is actually observed.

MT: If you want to pursue that topic, I suggest changing the subject line to
"Answering questions about perceptions" or something similar. That's
where our models differ -- not in their ability to provide answers, but in
their methods for achieving this result.

We can see where our models differ by implementing the diagrams of
those models as working computer models. So before we change the
subject line (which is currently " Testing for Control In
Experiments", which is exactly what I think we are doing) let's see
how the models really differ.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Rick Marken (2010.08.05.1220)]

Rick Marken (2010.08.04.1145)

I want to develop a test that will distinguish the models that do
exist. You (and Bill) have posted diagrams of these models and agreed

that the diagrams represent the two different models of the behavior
in a psychophysical experiment. I’ve asked Bill to re-post those
models but maybe you could do it more quickly. It is those diagrams

that I consider the two different models of behavior in a
psychophysical experiment that are to be tested.

OK, I’m apparently talking to myself. But being ignored has never stopped me so I have attached what I think are the two different models of the behavior in a psychophysical task (like the N alternative forced choice detection task) that have been proposed. There are still many details to work out (such as what switches the imagination connection in Martin’s model to the reference for an actual response). But I think the attached diagram correctly captures the basic architecture of the two models.

One model (mine, which I call “Marken”) controls the perception of a relationship between the stimulus, S (the noise intervals, one of which contains the signal), and the response, R (the subject’s answer): that relationship is the main observable CV in the model. The other model (Martin’s, which I call “Martin” because the difference between our models is as small as the difference between our last and first names) controls a perception of a relationship between S and the imagined answer, which I call R’. This CV is not observable so I have not indicated it as a CV in the diagram. I imagine that once there is no error in the relationship control system in Martin’s model the output of the relationship control system is switched from imagination to control mode, becoming the reference for R, which is the only CV that can be observed, according to Martin’s model.

The main difference between the two attached models is in the different hypotheses about the variables controlled in a psychophysical experiment. My model controls the relationship between S and R; Martin’s model controls only R.
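To make the contrast concrete, here is a minimal sketch of how I read the two diagrams as single-trial loops. The Gaussian "detection" stage, the collapse of each control loop to a single assignment, and every parameter value are placeholder assumptions of mine, not features of either diagram:

```python
import random

def detect(signal_interval, n_intervals=4, d_prime=1.5):
    """Stand-in for p[S]: return the interval judged most likely to
    contain the signal (Gaussian evidence is just an assumption)."""
    evidence = [random.gauss(d_prime if i == signal_interval else 0.0, 1.0)
                for i in range(n_intervals)]
    return evidence.index(max(evidence))

def marken_trial(signal_interval):
    """'Marken' model: the controlled perception is the relationship
    between S and the overt response R, so producing R is the means of
    bringing p[S-R] to its reference ('R matches S')."""
    p_S = detect(signal_interval)
    R = p_S                      # overt answer produced to control p[S-R]
    return R

def martin_trial(signal_interval, experimenter_asks=True):
    """'Martin' model: the relationship between S and an internal answer A
    is controlled first (A is not observable); R is produced only when
    the experimenter asks, with A serving as the reference for R."""
    p_S = detect(signal_interval)
    A = p_S                      # internal answer settles in imagination
    if experimenter_asks:
        return A                 # lower system makes R match r[R] = A
    return None                  # no overt response yet

if __name__ == "__main__":
    s = random.randrange(4)
    print("signal in interval", s,
          "| Marken R:", marken_trial(s),
          "| Martin R:", martin_trial(s))
```

On a single trial the two produce the same overt R, which is exactly why the test has to exploit situations where the imagined answer and the overt response can come apart.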

If we can get agreement (again!) on what these two models are then we can start developing tests that we can all agree discriminate between the two models.

Best

Rick

···

Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2101.08.06.00.00]

[From Rick Marken (2010.08.05.1220)]

> Rick Marken (2010.08.04.1145)

> I want to develop a test that will distinguish the models that do
> exist. You (and Bill) have posted diagrams of these models and agreed
> that the diagrams represent the two different models of the behavior
> in a psychophysical experiment. I've asked Bill to re-post those
> models but maybe you could do it more quickly. It is those diagrams
> that I consider the two different models of behavior in a
> psychophysical experiment that are to be tested.

OK, I'm apparently talking to myself. But being ignored has never stopped me so I have attached what I think are the two different models of the behavior in a psychophysical task (like the N alternative forced choice detection task) that have been proposed. There are still many details to work out (such as what switches the imagination connection in Martin's model to the reference for an actual response). But I think the attached diagram correctly captures the basic architecture of the two models.

I agree, except that you have the reference for the top-level control in my model as r[S-R] when it should be r[S-R']. Actually, although to call it R' is permissible, it does tend to be misleading, as it suggests that what is controlled involves a surrogate for the physical response, whereas in fact it does not. I'd prefer you to replace "R'" with "A" for "Answer" in the Martin model.

The main difference between the two attached models is in the different hypotheses about the variables controlled in a psychophysical experiment. My model controls the relationship between S and R; Martin's model controls only R.

No. My model controls the relationship between S and A (what you call R') as well as controlling R so that p(R) has the reference value "Ao" where Ao is the value of A when the relationship control has no error.

"What switches the imagination connection in Martin's model to the reference for an actual response" is part of the output side of the control concerned with dialogue with the experimenter, since only when the experimenter wants to see a response does the switch get thrown. In my model, the subject can generate the answer for himself or for later reference whether or not the experimenter wants to see a response now. In your model the subject can't do that.

Martin

[Martin Taylor 2010.08.06.0028]

I said...

[Martin Taylor 2101.08.06.00.00]

"What switches the imagination connection in Martin's model to the reference for an actual response" is part of the output side of the control concerned with dialogue with the experimenter, since only when the experimenter wants to see a response does the switch get thrown.

Just to go back to the Schouten experiment for a moment, in that experiment the third "bip" is when the experimenter wants to see the response. The switch is thrown when the third "bip" happens, regardless of whether the relationship control has reached a stable "no error" state. The overt response is whatever the state of the "A" variable might be at that moment.
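A toy simulation of that forced-response timing might look like the following. The linear evidence accrual, the rates, and the noise are placeholder assumptions; the only point is that the overt response is whatever A happens to be when the third "bip" arrives, so accuracy grows with the time allowed:

```python
import random

def schouten_trial(signal_interval, n_intervals=4, response_time=0.3,
                   accrual_rate=20.0, dt=0.01):
    """One forced-response trial: A tracks the interval with the most
    accumulated evidence; the third 'bip' (response_time) forces the overt
    response to take A's value at that moment, settled or not."""
    evidence = [0.0] * n_intervals
    t = 0.0
    while t < response_time:
        for i in range(n_intervals):
            drift = accrual_rate * dt if i == signal_interval else 0.0
            evidence[i] += random.gauss(drift, 1.0)   # noisy accumulation
        t += dt
    A = evidence.index(max(evidence))   # current state of the answer
    return A                            # switch thrown: R is given A's value

if __name__ == "__main__":
    for rt in (0.05, 0.2, 0.8):
        correct = sum(schouten_trial(0, response_time=rt) == 0
                      for _ in range(2000))
        print(f"forced response at t={rt:.2f}s: {correct/2000:.1%} correct")
```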

Martin

[From Rick Marken (2010.08.05.2200)]

Martin Taylor (2101.08.06.00.00) writing from the next century;-)

Thanks so much for the reply, Martin. I thought I had wasted the whole
morning making those pretty diagrams.

RM: But I think the attached diagram
correctly captures the basic architecture of the two models.

MT: I agree, except that you have the reference for the top-level control
in my model as r[S-R] when it should be r[S-R'].

You're right. I left the apostrophe off of the R. I'll correct that.

MT: Actually, although to call it R'
is permissible, it does tend to be misleading, as it suggests that what is
controlled involves a surrogate for the physical response, whereas in fact
it does not. I'd prefer you to replace "R'" with "A" for "Answer" in the
Martin model.

I'll use A in both models if you like, since I mean the same thing by
"R" in my model as you do in yours. Since "R" in both models is shown
to be the intended result of physical outputs (the output box labeled
"O"), like saying or writing the number "3" to indicate that the tone
occurred in interval 3, I thought it would be clear that the "R" in
both models is the subject's "answer". But I'll change all the "R"s to
"A"s to make this clear.

RM: The main difference between the two attached models is in the
different hypotheses about the variables controlled in a psychophysical
experiment. My model controls the relationship between S and R;
Martin's model controls only R.

MT: No. My model controls the relationship between S and A (what you
call R') as well as controlling R so that p(R) has the reference value "Ao"
where Ao is the value of A when the relationship control has no error.

Now I'm confused. Do you just want to call the imagined answer "A" and
leave the actual overt answer as "R"? That's OK with me but it seems
like it could be a bit confusing.

MT: "What switches the imagination connection in Martin's model to the
reference for an actual response" is part of the output side of the control
concerned with dialogue with the experimenter, since only when the
experimenter wants to see a response does the switch get thrown.

OK, then I need to add another control system that is controlling for
switching from imagination to control mode based on a signal from the
experimenter. But then how does your model account for the behavior in
experiments where the subject is to respond "on his own" in the
interval between stimulus presentations? In my own research the
subject was to write down their answer as soon as possible after the
stimulus was presented; there was no request for an answer from the
experimenter. Does your model deal only with the situation where the
subject is signaled to respond after the stimulus presentation? I
think I'll leave off the "experimenter request" control system for
now; we can put it in later if you really want it.

I'll get a revised version of the models to you soon to see if we're
on the right track. There are many more details to be handled before
we have working versions of the models but I just want to be sure the
basic architecture of the two models is correct.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Rick Marken (2010.08.05.2230)]

Rick Marken (2010.08.05.2200) to Martin Taylor (2101.08.06.00.00)

I'll get a revised version of the models to you soon to see if we're
on the right track.

And here they are. I added a few more labels for clarity. I hope
we're getting closer.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

(Attachment: Models2.gif)

(Gavin Ritz 2010-08-06.22.33NZT)

[From Rick Marken (2010.08.05.2230)]

Rick Marken (2010.08.05.2200) to Martin Taylor (2101.08.06.00.00)

I’ll get a revised version of the models to you soon to see if we’re
on the right track.

And here they are. I added a few more labels for clarity. I hope
we’re getting closer.

Rick

Can you please show me where, in the higher levels (above level 4 in HPCT), the perceptual input
function lies in the brain? What equivalent function in the human body or brain
represents the higher perceptual function? What neural pathways and functions can
be defined as the perceptual input function?

The lowest one is pretty easy to define: it’s all the main sensory organs, where the
transduction of external energies begins.

Regards

Gavin

I suggest that you two keep discussing this model and diagram until you
both agree that you are both talking about the same things. Right now you
are not. The disagreements are meaningless. When Rick says something is
controlled, he means really controlled, out where the Test would
find it being controlled. Martin means only that a control system is
acting, regardless of whether an external observer can see it acting. You
are both being sloppy and imprecise in your use of language. You think
you’re having a discussion but you are only sending words back and forth
without checking to see what meanings they are being given at the
receiving end. Enough!

Best,

Bill

···

At 12:12 AM 8/6/2010 -0400, you wrote:

[Martin Taylor 2101.08.06.00.00]

[From Rick Marken (2010.08.05.1220)]

[From Bill Powers (2010.08.06.0620 MDT)]

> Rick Marken (2010.08.05.2200) to Martin Taylor (2101.08.06.00.00)

> I'll get a revised version of the models to you soon to see if we're
> on the right track.

And here they are. I added a few more labels for clarity. I hope
we're getting closer.

May I suggest that in Martin's diagram, a switch be added to the crossed circle at the output of the second-order system? As drawn, the diagram suggests that while the correct answer is being searched for, each trial value becomes a reference signal for the lower system which generates output to create it as a perceptual signal. If you intend for the output of the lower system to occur only after the right answer is selected and the imagination mode is removed, the switch is necessary. A simple symbol for a single-pole double-throw switch is

       o-------
      /
-----O
       o-------

Either that, or you would need a rather complicated hypothesis about switching the lower level output function off while imagining and on while actually producing the overt answer. Of course you have to define the state of the answer when the reference signal for it is zero. Experimenter: "Did you press button A?" Subject: "No, I haven't pressed any button yet." Experimenter: "But look, the chart shows A being pressed." Subject: "Well, maybe my sleeve brushed against the button, but I didn't press it." Experimenter: "How do you know you didn't press it? There's no signal going to a higher system to indicate whether you did or didn't, and this isn't your spinal cord talking." Subject: "Oh, you're right. Sorry. I guess I must have pressed the button, but I didn't mean to."

Glad to see you guys sticking with it until you agree on what you're talking about.

Best,

Bill P.

[From Bill Powers (2010.08.06.0658 MDT)]

Gavin Ritz
2010-08-06.22.33NZT –

Rick

Can you please show me where in the higher levels (above level 4 in HPCT)
where the perceptual input function lies in the brain? What equivalent
function in the human body or brain represents the higher perceptual
function? What neural pathways and functions can be defined as the
perceptual input function.

I’ll answer this one since the ideas came from my studies of
neuroanatomy. I trust the basic ideas haven’t changed too much since then
at least at the level of what is connected to what.

The short answer is “sensory nuclei” at the lower levels and
“layered columns” in the cortex. Sensory nuclei are dense
networks of connections into which sensory signals come from below and
out of which signals go both to higher centers and, via
“collaterals”, to motor nuclei at the same level (which seem
also to contain the comparators where downgoing signals of one sign meet
oppositely-signed signals entering from the collaterals). See my Byte
articles for the relationship between our usual way of drawing the
hierarchy and the way it is probably physically organized in the
brain.

In the cortex, the general picture is a set of columns that go from the
inner surface of the cortex, up through the cortex, and to the outer
surface. Groups of columns seem to go with different sensory modalities.
Within each column there appear to be layers which might correspond to
levels of control, though above the inner layer, the basal ganglia, the
connections get very complicated. Little is known about the details at
the higher levels; I just assumed that the architecture is similar to
what is (better) known at the midbrain and lower levels. All subject to
revision, of course, as neurology becomes more able to test hypotheses
about brain function. It can’t do a lot for us right now.

When you asked about “the perceptual input function,” I trust
you were speaking generically, not about one single perceptual input
function. There are probably thousands of perceptual input functions,
each producing just one perceptual signal representing one kind of
perception. The Byte article shows that these input functions are
probably all close together at each level, so there can be interactions
among them, but they can still be represented as separate functions with,
perhaps, redundant computations in them to take care of the interactions.
See Figures 3.11 and 3.12 in B:CP.

Best,

Bill P.

[Martin Taylor 2010.08.06.10.46]

[From Rick Marken (2010.08.05.2200)]

RM: Martin Taylor (2101.08.06.00.00) writing from the next century;-)

Always ahead of the game!

RM: Thanks so much for the reply, Martin. I thought I had wasted the whole
morning making those pretty diagrams.

Not at all. They are a different drawing style from mine, but they
show very well what we both intend.

MT: Actually, although to call it R'
is permissible, it does tend to be misleading, as it suggests that what is
controlled involves a surrogate for the physical response, whereas in fact
it does not. I'd prefer you to replace "R'" with "A" for "Answer" in the
Martin model.
RM: I'll use A in both models if you like, since I mean the same thing by
"R" in my model as you do in yours.

No. We mean the same by R in both models. I want A instead of R' to
emphasise that A is NOT the same as R. “A” is a number, or the
abstract perception that corresponds to the idea of a number,
whereas R might be a button push or a vocalization, or the showing
of three fingers, or a succession of taps (think “Clever Hans”).

Since "R" in both models is shown
to be the intended result of physical outputs (the output box labeled
"O"), like saying or writing the number "3" to indicate that the tone
occurred in interval 3, I thought it would be clear that the "R" in
both models is the subject's "answer".
No, in both models it is the subject's "response" (something

observable by the experimenter). That is not what is related to the
perceived stimulus in the relationship control system of my model.
In my model, the subject determines the answer, and if the subject
is controlling for perceiving the experimenter to be able to know
the answer, then the subject makes a response, which can be thought
of as a translation of the answer into some observable form, in the
same way that I am now translating what I am trying to get across to
you into keystrokes that are translated into letters on my screen –
and with luck, later on your screen. My thought is not those
keystrokes, and the subject’s answer is not the subject’s response

RM: But I'll change all the "R"s to
"A"s to make this clear.

RM: The main difference between the two attached models is in the
different hypotheses about the variables controlled in a psychophysical
experiment. My model controls the relationship between S and R;
Martin's model controls only R.

MT: No. My model controls the relationship between S and A (what you
call R') as well as controlling R so that p(R) has the reference value "Ao"
where Ao is the value of A when the relationship control has no error.

RM: Now I'm confused. Do you just want to call the imagined answer "A" and
leave the actual overt answer as "R"? That's OK with me but it seems
like it could be a bit confusing.

Yes, as I explained above.

Think of it by analogy with a tracking study using a joystick. The
subject is trying to control a perception of the relationship
between cursor and target. That’s a perceived distance. The output
generated by way of several levels of control is a muscular action
that alters the angle of a joystick. That angle is visible to
anyone. It is not the perceived distance between cursor and target,
nor is it the subject’s perceived location of the cursor on the
screen. They are different, just as (in my model) the subject’s
“answer” is different from the overt response.


MT: "What switches the imagination connection in Martin's model to the
reference for an actual response" is part of the output side of the control
concerned with dialogue with the experimenter, since only when the
experimenter wants to see a response does the switch get thrown.
RM: OK, then I need to add another control system that is controlling for
switching from imagination to control mode based on a signal from the
experimenter. But then how does your model account for the behavior in
experiments where the subject is to respond "on his own" in the
interval between stimulus presentations?

Only in a Schouten-style experiment does the subject need a signal
from the experimenter. I’ve done such experiments, delaying the
instruction to the subject about which of four possible
discriminations to make, but usually there is no such signal.

I included "when" to take account of the situation in the two

experiments I proposed to discriminate between the models. In one,
the experimenter does not request a response and the subject does
not make one for most of the trials, but the subject has to generate
the “answer” in case the experimenter later does request a response.
In the other, the subject makes a response but only after the
following trial, and that response is of a different nature from the
“answer” – the “answer” being a number between 1 and 4 for each
trial, but the response being “Same” or “Different” when the answers
for successive trials are compared.

Normally, as I described it initially, the subject is controlling
for perceiving the experimenter to be satisfied, which is achieved
by trying to follow instructions. If the subject perceives the
instructions as demanding a response as fast as possible after each
trial, that’s when the subject will “throw the switch”. If the
subject perceives the instructions as requiring no response unless
there is a red flag, then the switch will be thrown when the subject
perceives the red flag.

Here's my Feb 14 sketch of the relationship between the Experimenter
and the Subject. If it were to be turned into a model, it would need
a lot of fleshing out, but I think it gives the general idea that
the subject’s actions in the experiment are just a pathway in the
control loop in which the subject controls for perceiving the
Experimenter to be satisfied. That loop is green in the figure. The
blue loop is the experimenter’s control of the running of the
experiment, using disturbances to the subject’s controlled
relationship between stimulus and “answer” to induce actions that
complete his control loop. The red pathway is the one whose
properties the experimenter is trying to discover. The dashed arcs
connecting subject and experimenter indicate virtual pathways that
are physically implemented by way of the solid arrow paths.

This diagram does not include the loops that have so concerned you,
of how the output of the response relates to the relationship
control system. Nor does it include the “switch” required by my
model. You could replace the bottom half of the “Subject” panel in
this diagram with either your model or mine. The sketch shows the
first version of my model, but without the “switch” which I had
taken for granted without making it explicit.

![Experimenter_Subject.jpg|474x448](upload://fN6D9Ko0FehDKEijLYuJhTBd4AP.jpeg)

Martin
···

On 2010/08/6 1:00 AM, Richard Marken wrote:

[From Rick Marken (2010.08.06.1100)]

Bill Powers (2010.08.06.0620 MDT)

May I suggest that in Martin's diagram, a switch be added to the crossed
circle at the output of the second-order system?

I was going to wait on proposing that until we were completely agreed
on the basic architecture (I see from glancing at the next post from
Martin that we still might not be). But thanks for the tips. I will
amend the diagram appropriately and see if it flies.

Glad to see you guys sticking with it until you agree on what you're talking
about.

I plan to stick with it until we are able to agree on an experimental
test. Even if that is until the end of time, which it very well may
be. If I don't get agreement within some reasonable period (like a
month or so) I'll just go ahead and propose a test of my model only
and just see if I can get agreement with you on it.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Rick Marken (2010.08.06.1230)]

Martin Taylor (2010.08.06.10.46)–

Rick Marken (2010.08.05.2200)
RM: I'll use A in both models if you like, since I mean the same thing by
"R" in my model as you do in yours.
MT: No. We mean the same by R in both models. I want A instead of R' to
emphasise that A is NOT the same as R. “A” is a number, or the
abstract perception that corresponds to the idea of a number,
whereas R might be a button push or a vocalization, or the showing
of three fingers, or a succession of taps (think “Clever Hans”).

I think what the variables represent depends on their role in the model. I’ve tried to clarify things in a newer version (Models3, attached), which also incorporates Bill’s suggested switch, which I copied from the memory model in B:CP. In both your model and mine r[A] is the reference specification for a perception of the answer that the subject is to produce, which could be a spoken number, a written number, a button press, etc. In both your model and mine the environmental variable A represents the physical variables from which the perception of A, p[A] is derived. So A could be sound waves (if the answer is to be a spoken number), lines on paper (if the answer is to be a written number) or a change in finger position (if the answer is to be a button press). Variable A represents the physical result of muscle forces (the output of O) that produce the intended answer (specified by r[A]), the type of answer that the experimenter has asked the subject to produce.

I think this current version of the models makes it clear that the only difference between our models is that your model spends at least part of a trial producing the intended answer (r[A]) only in imagination. This happens when, as in the diagram, the switch guiding the output of the system controlling the relationship between stimulus and answer (S-A) sends that output (r[A]) right back into the system that generated it, through its perceptual function. So both of our models control p[S-A], but in your model the A that is being controlled is the imagined answer (which is the answer specified by the S-A control system, r[A]) that will be produced when the experimenter asks for it, at which point the switch from the system controlling p[S-A] will go to the lower level system controlling for perceiving the answer, p[A].
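Here is a minimal sketch of how I picture the Models3 routing working, treating the answer as a number and the stimulus-answer relationship as a simple difference. The integrating output function, the gains, and the one-line stand-in for the lower A-control loop are all placeholder assumptions for illustration only:

```python
def run_trial(p_S=3.0, dt=0.1, gain=2.0, imagination_steps=40):
    """Sketch of the Models3 routing: while the switch is in imagination
    mode, the S-A system's output r[A] is fed straight back into its own
    perceptual function as the imagined answer; when the switch is thrown,
    the same signal becomes the reference for the lower system that
    produces the overt answer."""
    r_A = 0.0                          # output of the S-A system (= r[A])
    for _ in range(imagination_steps): # switch up: imagination mode
        A = r_A                        # imagined answer is r[A] fed back
        p_rel = A - p_S                # perceived S-A relationship
        error = 0.0 - p_rel            # reference: no discrepancy
        r_A += gain * error * dt       # integrating output function
    # Switch thrown: r[A] now drives the lower A-control loop, collapsed
    # here to a single assignment that makes the overt answer match it.
    A_overt = r_A
    return A_overt

print(run_trial(p_S=3.0))              # overt answer converges near 3
```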

So let me know if this current iteration is on the right track.

Best

Rick

PS. Bill, please feel free to chime in on this at any time.

···


Richard S. Marken PhD

rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2010.08.06.17.01]

[From Rick Marken (2010.08.06.1230)]

RM: I think this current version of the
models makes it clear that the only difference between our
models is that your model spends at least part of a trial
producing the intended answer (r[A]) only in imagination. This
happens when, as in the diagram, the switch guiding the output
of the system controlling the relationship between stimulus and
answer (S-A) sends that output (r[A]) right back into the system
that generated it, through its perceptual function. So both of
our models control p[S-A], but in your model the A that is being
controlled is the imagined answer (which is the answer specified
by the S-A control system, r[A]) that will be produced when the
experimenter asks for it, at which point the switch from the
system controlling p[S-A] will go to the lower level system
controlling for perceiving the answer, p[A].

RM: So let me know if this current iteration is on the right track.

I do have some quibbles with the labelling, which are the same as
what I mentioned earlier.

I don't care how you label in your model, but I do want the overt
response variable in my model to have a DIFFERENT label from the
answer variable. I suggested using R, as you originally had it, for
the response in each model, but changing your original R’ to A in my
model. Now you have changed everything to A, which completely
negates the point of making it crystal clear that they are not just
horses of different colours, but different kinds of animals!

There were a couple of other points, such as that after S passes
through a perceptual function, it is no longer S, but p[S], and p
is what the experimenter is trying to determine, so I’ve tried to
redraw my model using your drawing style and notation style, leaving
your model drawing alone. I hope I didn’t miss something important.

![psychophysModels.jpg|854x429](upload://vvP4FlSj0eGnPiV8hfDa5QThrbc.jpeg)

I think we seem to be getting there.

Martin
···

On 2010/08/6 3:36 PM, Richard Marken wrote:

[From Rick Marken (2010.08.06.1630)]

Martin Taylor (2010.08.06.17.01) –

Rick Marken (2010.08.06.1230)]

RM: So let me know if this current iteration is on the right track.

MT: I do have some quibbles with the labelling, which are the same as
what I mentioned earlier.

RM: Yes, I like your changes to the names of the perceptual signals. I don’t really understand why you want to call the imagined response signal “A” and the perceived response signal “p[R]”. After all, what you call “A” is functionally just the reference signal for p[R], but with the imagination “switch” thrown so that this signal directly enters the perceptual function that perceives the relationship between the stimulus and response (as perceived). I also don’t agree with calling p[p[S]-A] a controlled variable. Bill addressed this in his comments earlier. A controlled variable (in PCT) has an environmental correlate, the controlled quantity, which can be detected by an experimenter. There is no environmental correlate of p[p[S]-A] (because A exists only in the subject’s imagination) so it’s really not a controlled variable.

MT: I don't care how you label in your model, but I do want the overt
response variable in my model to have a DIFFERENT label from the
answer variable.

I think it’s important to give the same variables in the two models the same names. I believe the variable labeled “A” in the diagram of my model is precisely the same as the variable you label “R” in your diagram: they are both the answer that the subject is observed to make in the experiment. If this is not the case, then our models are not models of the same behavior and, therefore, not comparable.

But I don’t want to argue about variable names. If you accept the two diagrams as being reasonable descriptions of the architecture of our two models then I think the next step is to implement them as working models of a detection experiment. I would recommend starting with a simple yes/no detection task rather than a forced choice task. I can’t think of a way to model an N alternative forced choice detection task without having N control systems like the ones shown in our models, with some mutual inhibition between them for when more than one system detects a tone. It could get complex fast.

I suggest that we model an experiment where on each trial there is a burst of noise that does or does not contain a tone. The subject’s task is to say “yes” if a tone is present and “no” otherwise. I think both of our models could be applied to this task quite easily.
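Something like this minimal trial generator would do for a start; the single-number Gaussian "sensory effect" and the fixed criterion are just stand-ins for whatever the models' p[S] functions turn out to be:

```python
import random

def make_trial(p_signal=0.5, tone_level=1.0, noise_sd=1.0):
    """One yes/no trial: a noise burst that may or may not contain a tone.
    Returns the true state and the sensory effect handed to p[S]."""
    tone_present = random.random() < p_signal
    sensory_effect = random.gauss(tone_level if tone_present else 0.0,
                                  noise_sd)
    return tone_present, sensory_effect

def p_S(sensory_effect, criterion=0.5):
    """Toy perceptual input function: 'tone perceived' if the effect
    exceeds a criterion (the criterion value is arbitrary)."""
    return sensory_effect > criterion

if __name__ == "__main__":
    hits = false_alarms = signal_trials = noise_trials = 0
    for _ in range(10000):
        present, effect = make_trial()
        answer = p_S(effect)        # either model's loops would start here
        if present:
            signal_trials += 1
            hits += answer
        else:
            noise_trials += 1
            false_alarms += answer
    print("hit rate:", hits / signal_trials,
          "false alarm rate:", false_alarms / noise_trials)
```

Either model's control loops would then sit between p_S and the reported answer; the question is what difference in the data they would produce.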

MT: I think we seem to be getting there.

I agree. So let’s start building.

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

(Gavin Ritz 2010.08.07.13.17NZT)

[From Bill Powers (2010.08.06.0658 MDT)]

Gavin Ritz 2010-08-06.22.33NZT –

Rick

Can you please show me where in the higher levels (above level 4 in HPCT) where
the perceptual input function lies in the brain? What equivalent function in
the human body or brain represents the higher perceptual function? What neural
pathways and functions can be defined as the perceptual input function.

I’ll answer this one since the ideas came from my studies of neuroanatomy. I
trust the basic ideas haven’t changed too much since then at least at the level
of what is connected to what.

The short answer is “sensory nuclei” at the lower levels and
“layered columns” in the cortex.

I would like to get good drawings of these
please, preferably models.

Sensory nuclei are dense
networks of connections into which sensory signals come from below and out of
which signals go both to higher centers and, via “collaterals”, to
motor nuclei at the same level (which seem also to contain the comparators
where downgoing signals of one sign meet oppositely-signed signals entering
from the collaterals).

See my Byte articles

I haven’t read the Byte articles;
may I please have a copy, or send me to the source to purchase them?

for the relationship
between our usual way of drawing the hierarchy and the way it is probably
physically organized in the brain.

In the cortex, the general picture is a set of columns that go from the inner
surface of the cortex, up through the cortex, and to the outer surface. Groups
of columns seem to go with different sensory modalities. Within each column
there appear to be layers which might correspond to levels of control, though
above the inner layer, the basal ganglia, the connections get very complicated.

Is this what you are talking about?
Attached drawing.

Little is known
about the details at the higher levels;

Yes, that part has become very clear too.

I just assumed that the
architecture is similar to what is (better) known at the midbrain and lower
levels. All subject to revision, of course, as neurology becomes more able to
test hypothesis about brain function.

It can’t do a lot for us
right now.

I’m not so sure about this.

When you asked about “the perceptual input function,” I trust you
were speaking generically, not about one single perceptual input function.

I was hoping that we wouldn’t be
going here again. I have spent hundreds of hours on your theory; I expect none
of these types of comments.

There are probably
thousands of perceptual input functions, each producing just one perceptual
signal representing one kind of perception.

Maybe millions.

The Byte article shows
that these input functions are probably all close together at each level, so
there can be interactions among them, but they can still be represented as
separate functions with, perhaps, redundant computations in them to take care
of the interactions. See Figures 3.11 and 3.12 in B:CP.

Not very helpful diagrams. I’m
looking for the actual brain structure and functions as it relates to PCT
models.

The more I look at PCT at the higher
levels the more convinced I am that using lower-level-type modeling is probably not
going to suffice. PCT actually shows a good relationship with science at the
input function level, at the lowest level (where the rubber meets the road, so
to speak).

All modalities in PCT line up with science,
pressure, temp, light, chemical etc. So good math there but as soon as we go up
levels it runs into the same brick wall as science. Human organisms requisitely live
in qualitative notions. Hence my previous comment that you cannot create a Picasso out of 5 stones. Okay so
the stones have a sensation (roughness), a smell, a shape (config), a temperature,
color (electromagnetic) but like all models lack the qualitative aspect. The stone
may well represent something else un-capturable, shaman’s powers, beauty,
secret weapon, talisman, exchange system etc etc.

Science is absolutely connected to maths;
they are embraced totally and completely. And even our counting system exhibits
spiral causation aspects (circular causation is just one aspect of spiral causation)
that are totally not yet understood.

As you said before an almost impossible task,
I couldn’t agree more with you.

Regards

Gavin

(Attachment: CerebCircuit.png)

[Martin Taylor 2010.08.06.23.09]

[From Rick Marken (2010.08.06.1630)]

Martin Taylor (2010.08.06.17.01) –

Rick Marken (2010.08.06.1230)]

RM: So let me know if this current iteration is on the right track.

MT: I do have some quibbles with the labelling, which are the same as what I mentioned earlier.

RM: Yes, I like your changes to the names of the perceptual
signals. I don’t really understand why you want to call the
imagined response signal “A” and the perceived response
signal “p[R]”.

Because A and R are of completely different character, as I have
pointed out over and over again. You keep wanting to call them the
same thing, because in your model they are. In my model they are
not. If you wanted to draw a complete circuit diagram for the model,
you would have a lot more output stages between A and R than the
single one shown in both our models. Just as in a tracking
experiment where the conventional diagram shows just a simple line
between the output of the control system controlling
p(Cursor-Target), instead of a whole complex of control systems to
the top of which the C-T control system sends a reference signal, so
in our models we make one simple control loop take the place of many
whose end result is to produce R.

RM: After all, what you call "A" is functionally just the
reference signal for p[R], but with the imagination “switch”
thrown so that this signal directly enters the perceptual
function that perceives the relationship between the stimulus
and response (as perceived).

Not quite. A is a variable but r[R] is its value at some moment --
usually the moment when the p[p[S] - A] control system has arrived
at zero error.

RM: I also don't agree with calling p[p[S]-A] a controlled
variable.

Of course it is a controlled variable. Just as with any control
system, the output signal influences the perceptual signal, and
changes its value to approach its reference value. What’s not a
controlled variable about that?

RM: Bill addressed this in his comments earlier. A controlled
variable (in PCT) has an environmental correlate, the
controlled quantity, which can be detected by an experimenter.

The controlled quantity is ALWAYS a perception, never an
environmental variable. That’s PCT 101, first lesson. And as Bill
corrected me in the partner thread, the output of a control unit
knows or cares nothing about where its signal goes. Control exists
if the output influences the perception. If the influence of the
output reaches the perception through the external environment, so
be it. If the influence of the output reaches the perception by way
of an internal connection we call imagination, so be it. If the
influence of the output reaches the perception both through the
external environment and through internal pathways, so be it. Any
which way, it’s irrelevant to the control unit. Control through an
internal feedback pathway is likely to be orders of magnitude
faster, than through a pathway through the external environment, and
that might affect the stability of the control unit, but otherwise,
it’s irrelevant.

The way you use "(in PCT)" strongly suggests that "PCT" is what some
authority says it is, like a religion, rather than what analysis and
experiment discover it to be, like a science. I don’t go along with
that. And in any case, I reiterate my reminder to you that it is
“PERCEPTUAL Control Theory”, not “Environmental Variable Control
Theory”.

RM: There is no environmental correlate of p[p[S]-A] (because A
exists only in the subject’s imagination) so it’s really not a
controlled variable.

The part before "so..." is true; the part after "...so" is not.

MT: I don't care how you label in your model, but I do want the overt response
variable in my model to have a DIFFERENT label from the
answer variable.

RM: I think it's important to give the same variables in the two
models the same names. I believe the variable labeled “A” in
the diagram of my model is precisely the same as the variable you
label “R” in your diagram: they are both the answer
that the subject is observed to make in the experiment.

That's correct, but I didn't redraw your model or revise its
labelling, since the model is yours. I’d like it better if you used
the label R in your model, because your model really doesn’t have
anything equivalent to my variable A.

RM: But I don't want to argue about variable names. If you accept
the two diagrams as being reasonable descriptions of the
architecture of our two models then I think the next step is
to implement them as working models of a detection
experiment. I would recommend starting with a simple yes/no
detection task rather than a forced choice task.

I can't see a problem with that. The controlled variable, and
possibly its level in the perceptual hierarchy, might be different,
but that shouldn’t affect the question at hand.

Having modelled it, what do you do then to discriminate between the
models?

RM: I can't think of a way to model an N alternative forced
choice detection task without having N control systems like
the ones shown in our models, with some mutual inhibition
between them for when more than one system detects a tone. It
could get complex fast.

I don't think it's as complex as you suggest. I think the difference
should be handled in the perceptual system, in the part represented
by the p[S] box in our diagrams. In an N-AFC experiment, the p[S]
function has to have at least two stages. The first provides some
index of the likelihood that each interval in turn contains a
signal, and the second is a sequence-level perception. Sequences
“High Low Low Low” and “Low Low High Low” are different, so either
my “A” would need to be an imagined sequence or there would need to
be a third perceptual stage in which the sequences were translated
into category perceptions “First” or “Second” or… “Nth” and the
relationship control done at the category level. I rather prefer
this second possibility, but I don’t know how one could distinguish
them experimentally.
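A rough sketch of that staged p[S], with the optional category stage, might be the following; the Gaussian evidence index and the High/Low criterion are placeholder assumptions, not commitments about the real perceptual functions:

```python
import random

def stage1_likelihoods(signal_interval, n_intervals=4, d_prime=1.0):
    """First stage: an index of the likelihood that each interval
    contained the signal."""
    return [random.gauss(d_prime if i == signal_interval else 0.0, 1.0)
            for i in range(n_intervals)]

def stage2_sequence(likelihoods, criterion=0.5):
    """Second stage: a sequence-level perception such as
    ('Low', 'High', 'Low', 'Low')."""
    return tuple("High" if x > criterion else "Low" for x in likelihoods)

def stage3_category(likelihoods):
    """Optional third stage: compare the intervals, knowing exactly one
    contained the signal, and translate into a category perception
    'First'..'Nth' (here just the interval number)."""
    return likelihoods.index(max(likelihoods)) + 1

if __name__ == "__main__":
    L = stage1_likelihoods(signal_interval=2)
    print(stage2_sequence(L), "-> category:", stage3_category(L))
```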

Remember from your psychophysicist days that in an N-alternative
forced choice experiment, the subject quite often would not have
said “Yes” to the signal having been in any one of the alternatives,
had it been presented alone. What the subject does is to compare the
intervals, and assess which interval is most likely to have
contained the signal, knowing that at least one interval did.
[Aside: very early in my career we did some 2AUC experiments (2
alternative unforced choice) in which the subject was not required
to choose if she was unhappy with making a choice. We made up and
published tables of d’ especially for this case. I don’t remember
the results as being very different from the forced-choice
experiment, but it was a long time ago.]

RM: I suggest that we model an experiment where on each trial
there is a burst of noise that does or does not contain a
tone. The subject’s task is to say “yes” if a tone is present
and “no” otherwise. I think both of our models could be
applied to this task quite easily.

Yes, but what is the experiment that distinguishes between the
models? What data would be collected, and what characteristics of
the data would favour one model over the other?

Martin

[From Bill Powers (2010.08.07.0145 MDT)]

Gavin Ritz 2010.08.07.13.17NZT –

BP: Here is a link to Dag’s web page where the Byte articles can be found
and downloaded. The diagrams I mention are in the third
installment.

http://www.livingcontrolsystems.com/intro_papers/bill_pct.html

BP: In the cortex, the general
picture is a set of columns that go from the inner surface of the cortex,
up through the cortex, and to the outer surface. Groups of columns seem
to go with different sensory modalities. Within each column there appear
to be layers which might correspond to levels of control, though above
the inner layer, the basal ganglia, the connections get very
complicated.

Is this what
you are talking about? Attached drawing.

No, that’s a diagram of the cerebellum, a lower-order structure at the
base of the brain that appears to handle dynamic stability in motor
behavior. The basal ganglia are just beneath the cerebrum and above the
thalamus. I presented a proposed model of the cerebellum at the 1994 CSG
meeting in Wales, but nobody got very interested in it – the math was
probably a bit too arcane. It was published in the Proceedings of that
meeting by the University of Aberystwyth, but I don’t seem to have any
records of it left.

I had a great diagram of the columns but can’t find it again. My hard
disk is a disorganized mess.

Little is known about the
details at the higher levels;

Yes that part
has become very clear to.

I just assumed that the
architecture is similar to what is (better) known at the midbrain and
lower levels. All subject to revision, of course, as neurology becomes
more able to test hypothesis about brain function.

It can’t do a lot for us right
now.

I’m not so
sure about this.

When you asked about “the perceptual input function,” I trust
you were speaking generically, not about one single perceptual input
function.

I was hoping
that we wouldn’t be going here again. I have spent hundreds of hours on
your theory; I expect none of these types of
comments.

I’m not a service organization, I’m just me. Get over it. Or manage your
language more carefully.

There are probably thousands of
perceptual input functions, each producing just one perceptual signal
representing one kind of perception.

Maybe
millions.

The Byte article shows that
these input functions are probably all close together at each level, so
there can be interactions among them, but they can still be represented
as separate functions with, perhaps, redundant computations in them to
take care of the interactions. See Figures 3.11 and 3.12 in B:CP.

Not very
helpful diagrams. I’m looking for the actual brain structure and
functions as it relates to PCT models.
The more I look at PCT at the
higher levels the more convinced that using lower levels type modeling is
probably not going to suffice. PCT shows actually a good relationship
with science at the input function level at the lowest level. (where the
rubber meets the road so to speak)
All modalities in PCT line up
with science, pressure, temp, light, chemical etc. So good math there but
as soon as we go up levels it runs into the same brick wall as science.
Human organism requisitely live in qualitative notions. Hence my previous
comments you cannot create a Picasso out of 5 stones. Okay so the stones
have a sensation (roughness), a smell, a shape (config) a temperature,
color (electromagnetic) but like all models lack the qualitative aspect.
The stone may well represent something else un-capturable, shaman’s
powers, beauty, secret weapon, talisman, exchange system etc
etc.

So is everybody, but the information doesn’t exist. Come back in 200
years.

The higher functions are all based strictly on subjective experience. I
do logical thinking and run other sorts of mental programs, so that’s a
level in the model. How it is implemented in the brain I don’t know. The
higher levels, of course, all have to act by using the lower levels.

If such things exist. Nothing is “uncapturable” after you know
how to capture it.

Science is
absolutely connected to math’s they are embraced totally and completely.
And even our counting system exhibits spiral causation (circular
causation is just one aspect of spiral causation) aspects totally not yet
understood.
As you said before an almost
impossible task, I couldn’t agree more with
you.

Spiral causation isn’t a bad notion, since every time around, the world
is a little different. But in PCT everything in the loop is happening at
the same time, and the beginning of the loop, wherever you start, is also
its end.

Don’t put mathematics down just because you don’t understand it.

I don’t worry about “impossible.” Everything is
impossible until you figure out how to do it. It just takes time (maybe
more than I have, but someone else will do the rest).

Best,

Bill P.

[From Bill Powers (2010.08.07.0210 MDT)]

Martin Taylor 2010.08.06.23.09 –

RM: Yes, I like your changes to
the names of the perceptual signals. I don’t really understand why you
want to call the imagined response signal “A” and the
perceived response signal “p[R]”.

MMT: Because A and R are of completely different character, as I have
pointed out over and over again.

BP: Please explain what is different between A and R, don’t just keep
saying they are different.

Rick is perfectly correct in pointing out that the imagined A gets
switched to be a reference level for the perceived R, which shows that A and R must
be equal. The output function at the higher level emits a signal O(e)
which, when connected to the input function, you call A, but when
connected to the lower order comparator you call r(R). I learned that
things equal to the same thing are equal to each other, so A = r(R). If
there is a difference in the signal that results from throwing the
switch, some mechanism needs to be defined and shown in the diagram to
make that clear. I think that what is really happening is that you are
perceiving something different when the switch is thrown: the difference
is in you, not the model.

MMT: You keep wanting to call
them the same thing, because in your model they are. In my model they are
not. If you wanted to draw a complete circuit diagram for the model, you
would have a lot more output stages between A and R than the single one
shown in both our models. Just as in a tracking experiment where the
conventional diagram shows just a simple line between the output of the
control system controlling p(Cursor-Target), instead of a whole complex
of control systems to the top of which the C-T control system sends a
reference signal, so in our models we make one simple control loop take
the place of many whose end result is to produce R.

BP: Are you just trying to say that R is a variable of lower order than
A? If so, you must actually insert the intervening level, or the diagram
becomes inexplicable. The way you have drawn it, A and r[R] are the same
variable. A = O(e) and r[R] = O(e); therefore A = r[R]. If that’s not
what you meant to draw, then draw what you meant. I find that when I have
to do that, it often happens that I have to change my mind about what I
meant.

RM: After all, what you call
“A” is functionally just the reference signal for p[R], but
with the imagination “switch” thrown so that this signal
directly enters the perceptual function that perceives the relationship
between the stimulus and response (as perceived).

MMT: Not quite. A is a variable but r[R] is its value at some moment –
usually the moment when the p[p[S] - A] control system has arrived at
zero error.

BP: That makes no sense. If you throw the switch, r[R] starts to covary
with O(e), whereas before, A covaried with O(e). Have you thought about
what happens to the relationship control loop while the switch is thrown
to the reference-signal side? The A perceptual input disappears, which
instantly alters the perception of the relationship, creating a very
large error. The result will be that O(e) and thus r(R) quickly becomes
very different from the former value of A. This is a very common sort of
design error in system designs – forgetting to take care of undefined
conditions.
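A toy run of such a loop shows the problem. The integrator output function and the reading of the disconnected A input as zero are my assumptions; define the open-switch condition differently and you get some other artifact, but it has to be defined:

```python
def relationship_loop(p_S=3.0, dt=0.1, gain=2.0,
                      steps_before=40, steps_after=10):
    """Run the relationship loop in imagination mode until it settles,
    then 'throw the switch' so the A input to the relationship perception
    disappears (reads as zero) while the loop keeps running. The output
    O(e), now acting as r[R], promptly runs away from the former A."""
    o = 0.0                              # O(e), the higher-level output
    for _ in range(steps_before):        # imagination mode: A = o
        error = 0.0 - (o - p_S)          # p[p[S]-A] with A = o
        o += gain * error * dt
    A_at_switch = o
    for _ in range(steps_after):         # switch thrown: A input gone
        error = 0.0 - (0.0 - p_S)        # A reads as 0 -> large error
        o += gain * error * dt
    return A_at_switch, o

A_before, o_after = relationship_loop()
print("A at the moment of switching:", round(A_before, 3),
      "| O(e) = r(R) shortly after:", round(o_after, 3))
```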

RM: I also don’t agree with
calling p[p[S]-A] a controlled variable.

MMT: Of course it is a controlled variable. Just as with any control
system, the output signal influences the perceptual signal, and changes
its value to approach its reference value. What’s not a controlled
variable about that?

RM: Bill addressed this in his
comments earlier. A controlled variable (in PCT) has an environmental
correlate, the controlled quantity, which can be detected by an
experimenter.

MMT: The controlled quantity is ALWAYS a perception, never an
environmental variable. That’s PCT 101, first lesson.

BP: Here we go again. Rick is defining a controlled variable as something
the external observer can see. If you would read his words that would be
obvious. To Rick, the internal variable is a controlled signal but
it is not what he calls “the CV.” Maybe it would help if both
of you would say CQ when you’re referring to the visible environmental
controlled quantity, and perceptual signal or controlled perception (CP)
when you mean the signal inside the system. I introduced the distinction
between a physical quantity (a variable outside the system) and a signal
(a variable inside the system) for the exact purpose of avoiding this
kind of confusion.

When you say “Of course it is a controlled variable” you’re
showing that at the moment you’re simply not conscious of the other
person’s different usage of the same term. That should embarrass the
inventor of layered protocols.

MMT: The way you use “(in
PCT)” strongly suggests that “PCT” is what some authority
says it is, like a religion, rather than what analysis and experiment
discover it to be, like a science. I don’t go along with that. And in any
case, I reiterate my reminder to you that it is “PERCEPTUAL Control
Theory”, not “Environmental Variable Control Theory”.

RM: There is no environmental
correlate of p[pS]-A] (because A exists only in the subject’s
imagination) so it’s really not a controlled variable.

MMT: The part before “so…” is true; the part after
“…so” is not.

BP: Not according to your usage of “controlled variable.” It is
true under Rick’s usage of the same term.

MT: I don’t care how you label in your model, but I do want the overt
response variable in my model to have a DIFFERENT label from the answer
variable.

WHY?

Especially since they have the same numerical value (at the instant just
after the switch has been thrown).

I don’t see how you can go on until these problems have been
fixed.

Best,

Bill P.