PCT researcher who doesn't talk

[Martin Taylor 2009.02.20.17.20]

[From Rick Marken (2009.02.20.1220)]

Bill Powers (2009.02.20.1100 MST)–

If you really don’t think you’re going to
change anyone’s mind, just what is the objective of your attacks?

I wonder what you consider to be attacks. Was it when I said to Martin
“Bill is right and you are wrong”?

I certainly didn’t find that to be an attack. I did find it a
disturbance to a belief I like to hold: that Rick Marken is a serious
scientist. You didn’t explain in what way Bill was right and I was
wrong (which might have helped clear away some misunderstanding you had
observed). Furthermore, you said this even AFTER Bill agreed that we
were BOTH right. Presumably, since at that time Bill and I had
reconciled our differences, you must have meant that Bill is right
because he is Bill, and I am wrong because I am Martin.

This all started when I asked Martin to comment/critique my
“Revolution” paper. I suspected that he would disagree with the
fundamental premise and, indeed, he did.

No I didn’t. I said it was a fine paper, on several occasions, and in
my initial comments I offered a few suggestions toward making your
point more powerfully. I even used your own criterion for determining
when one can use data from conventional experiments to discover
something useful about an individual. You asked what I meant by one
comment, and I explained – obviously in a way that was misunderstood.
When you and Bill thought I was saying something other than what I
meant, I explained in more detail. Bill followed up, you didn’t, other
than by repeating over and again that I was saying things that I had
told you I was not.

I didn’t feel like I was attacking Martin any more than I
thought he was attacking me (by saying things like I was only dealing
with what I imagined he was saying). I know Martin and I wouldn’t
engage him in debate if I didn’t think he could handle himself just
fine.

You didn’t engage me in debate except once: [Rick Marken
(2009.02.18.0820)], which I answered in some detail. I repeatedly asked
you to, but you didn’t. Your messages after my response to your single
serious message indicated that you had not read that response any more
than you had read any others of my messages in the thread. You assumed
that I MUST find your paper a disturbance, and therefore you need not
read anything that might disturb your strongly held belief. You may
perhaps be interested to learn that I don’t usually like to play
fantasy games, which is what you seem to me to have been doing
recently, so that was indeed a bit of a disturbance.

Bill has debated with me. You have not. Bill questioned details,
corrected my mistakes, accepted my suggestions when they seemed to him
to be improvements, and eventually we reached what I believe to be a
mutual understanding. You, on the other hand, even went so far as to
say explicitly that you don’t need to read what I write, because you
know it is going to be wrong.

Until you said you didn’t read my messages before deciding they are
wrong, I had been wondering why (apart from that once) none of your
messages commented on errors you presumably thought I had made in my
explanations. I wondered why you didn’t refer to the control-loop
diagrams I posted. All your messages on the thread have simply been
assertions of my ignorance of PCT, with never a suggestion as to which
of the facets of my PCT-based analysis has been wrong. You have, many
times in the “PCT Research and Statistics” thread, commented in detail
on things I never wrote, to the extent that I was contemplating phoning
Bill to ask whether he knew of anything that might be bothering you
that could lead to this kind of strange behaviour (one of my controlled
perceptions is that if I can do something to help someone in trouble, I
try to do it – I like you, and I perceive you to be troubled, but I
don’t know in what way or whether I can help).

I know Martin and I know he can take care of himself and I’m
pretty sure that what I said to him didn’t hurt.

Actually, it did hurt, because I hate the feeling that someone I like
seems to be sick, and there’s nothing I can do about it. Helplessness,
the inability to influence a controlled perception, is an unpleasant
feeling. You didn’t offend me. You just puzzled me, and left me feeling
in the end that the most useful thing I could do might be to ignore the
silly things you were writing. I don’t know whether you read my message
[Martin Taylor 2009.02.19.10.31] in which I said: "
I think this finishes this particular thread. I have exhausted all
convenient means at my disposal for controlling my perception that you
understand what I am saying. All of them have the same result –
nothing, nil, nada (to quote you)." But I mis-stated: What I meant was
that I considered the thread finished as far as direct comment on your
messages was concerned, unless you wrote something relevant to previous
messages in the thread. I did not mean I was no longer interested in
serious discussion on the topic.

I did not do anything to intentionally hurt anyone. I would bet Martin
was not hurt at all. If he was, I apologize. He was simply making his
case for conventional research, a case I knew he’d make, and be happy
to do it. I thought it was “fun”, not because Martin would be hurt but,
rather, because, in his arguments, I knew Martin would be revealing the
problems I have had trying to explain the PCT perspective to my
research psychologist peers (problems I described in the “Revolution”
paper itself). These arguments provided a nice basis for discussing
these issues, and we did get a start at discussing them.

When?

You reveal quite a bit when you say: “He was simply making his case for
conventional research, a case I knew he’d make, and be happy to do it.”
That, in itself, tells how much you have read in that thread. The case I
made, and will continue to make, is that there are circumstances in
which results obtained by conventional methods are nevertheless useful
for studying some properties of living control systems. That’s a VERY
different thing.

I am sorry that you find yourself incapable of arguing the matter
coherently, and I hope you get better soon.

Martin

[From Rick Marken (2009.02.20.1740)]

Martin Taylor (2009.02.20.17.20) -

I certainly didn't find that to be an attack. I did find it a disturbance to
a belief I like to hold: that Rick Marken is a serious scientist.

That's fine. I'd rather be disappointing than mean;-)

You didn't explain in what way Bill was right and I was wrong

I was trying to let Bill handle the discussion; I didn't want to
interfere. But I'll just say now that what I thought Bill was right
about was that you were talking about outputs not having a feedback
effect on presentations as though this implied an open loop
relationship between presentations (by which I assumed you meant
environmental events, S) and outputs (R); but outputs never affect
presentations (R doesn't have feedback effects on S in a closed loop);
they affect controlled variables, V, (which are disturbed by
presentations) as Bill said.

This all started when I asked Martin to comment/critique my "Revolution"
paper. I suspected that he would disagree with the fundamental premise and,
indeed, he did.

No I didn't. I said it was a fine paper, on several occasions, and in my
initial comments I offered a few suggestions toward making your point more
powerfully. I even used your own criterion for determining when one can use
data from conventional experiments to discover something useful about an
individual.

This is why it's difficult for me to talk with you about this stuff.
It seems like we are talking past one another (as Bill noted). You may
have praised the paper (thanks) but you were disagreeing with its
fundamental premise, perhaps without knowing it, which is that you can
never discover anything useful about individuals (closed loop systems)
using conventional methods. There is no "criterion for determining
when one can use data from conventional experiments to discover
something useful about an individual" in my paper. I believe that what
you took as such a criterion (that there be an open loop relationship
between IV and DV) is what I was saying is never true in conventional
experiments.

You didn't engage me in debate except once: [Rick Marken (2009.02.18.0820)],
which I answered in some detail.

Yes, I'm sorry. I was trying to let Bill handle it but sometimes I
just couldn't help but chime in.

You assumed that I MUST find your paper a
disturbance, and therefore you need not read anything that might disturb
your strongly held belief. You may perhaps be interested to learn that I
don't usually like to play fantasy games, which is what you seem to me to
have been doing recently, so that was indeed a bit of a disturbance.

I understand how you feel. It's actually fine with me if you want to
assume that I'm just fantasizing. I think the problem is that we have
such a fundamental disagreement that the only resolution is just to
assume that we are both making this stuff up;-)

Bill has debated with me. You have not. Bill questioned details, corrected
my mistakes, accepted my suggestions when they seemed to him to be
improvements, and eventually we reached what I believe to be a mutual
understanding.

I know. Bill is good at giving that impression. I stink at it.

You, on the other hand, even went so far as to say explicitly
that you don't need to read what I write, because you know it is going to be
wrong.

That's not quite what I meant. What I said was that I treat the
demonstrations of control phenomena developed by Bill (and to a lesser
extent by myself) as demonstrations of principle. Bill demonstrated
that the kind of relationship between IV and DV seen in conventional
experiments is what would be expected from a closed loop system
protecting a CV from disturbance by the IV. The relationship that is
seen is the inverse of the feedback connection between DV and CV. This
is true if the system under study is a control system. And if the
system under study is a control system then there are no open-loop
connections from any IV to any DV. This is true in principle when you
are dealing with a closed loop control system. So if people are
control systems (which I think they are) then there is simply no such
thing as a conventional psychological experiment where the
relationship between any IV (environmental variable) and any DV tells
us anything useful about the organism. Bill was starting to dissect
your experiment in detail to show why this is the case in your
specific example, until he got diverted into dissecting me. But if you
want to continue going over the details of your experiment with me
that would be fine; it's not that I don't want to hear about it; it's
just that I already know, from your description of the experiment as
well as from the demonstrations of principle in Bill's _Psych Review_
paper, that the observed relationships between variables that you
think are "useful" must be some kind of a response to a disturbance to
a CV and, thus, an example of the "behavioral illusion" if the
subjects in the experiment were control systems (people).
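
Here's a minimal numerical sketch of that point (my own toy example, not
anything from Bill's paper; the reference level, gain, and feedback
constant are arbitrary assumptions). Whatever "organism function" sits
inside the loop, the steady-state relation between the IV (a disturbance
d) and the DV (the output o) traces the inverse of the environmental
feedback function:

def run_loop(d, ref=10.0, k=2.0, gain=50.0, dt=0.01, steps=2000):
    """Settle a simple proportional control loop and return its output."""
    o = 0.0                          # output quantity: the would-be DV
    for _ in range(steps):
        v = d + k * o                # controlled variable: disturbance plus feedback
        o += gain * (ref - v) * dt   # error drives the output
    return o

for d in (0.0, 2.0, 4.0, 6.0):       # the disturbance plays the role of the IV
    print(f"IV d={d:4.1f}  DV o={run_loop(d):6.3f}  "
          f"inverse feedback (ref-d)/k={(10.0 - d) / 2.0:6.3f}")

The observed DV follows (ref - d)/k exactly, a fact about the
environmental feedback link rather than about the organism: the
behavioral illusion.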

All your messages on the thread have simply been assertions of my ignorance of
PCT, with never a suggestion as to which of the facets of my PCT-based
analysis has been wrong.

Sorry, you are definitely not ignorant of PCT; you know it better than
almost anyone. It's just the implications of PCT for conventional
research that (I think) you don't quite get.

I know Martin and I know he can take care of himself and I'm pretty sure
that what I said to him didn't hurt.

Actually, it did hurt, because I hate the feeling that someone I like seems
to be sick, and there's nothing I can do about it.

Again, I'm sorry that I hurt you in that way but clearly my apparent
sickness (or cluelessness) is unintentional.

Helplessness, the
inability to influence a controlled perception, is an unpleasant feeling.

Yes, that sounds right. You can't get me to see it your way. It must
be very frustrating. For what it's worth, I have the same feelings
very often myself. I deal with it by trying to not control for the
other person getting it.

You didn't offend me. You just puzzled me

OK, I'm glad I didn't offend. I figured.

You reveal quite a bit when you say: "He was simply making his case for
conventional research, a case I knew he'd make, and be happy to do it."
That, in itself, tells how much you have read in that thread. The case I
made, and will continue to make, is that there are circumstances in which
results obtained by conventional methods are nevertheless useful for
studying some properties of living control systems. That's a VERY different
thing.

Yes, I know that's the case you were making. And I disagree with you
completely. But I still like you;-)

I am sorry that you find yourself incapable of arguing the matter
coherently, and I hope you get better soon.

I'm willing to keep working on it; after all, it's been the main theme
of my work in PCT (that PCT invalidates the conventional methods used
to study behavior and the mind). But don't worry about me; I may be
incapable of understanding you but I don't think I'm sick (well, not
in that way;-))

Thanks for the nice post.

Best

Rick


--
Richard S. Marken PhD
rsmarken@gmail.com

[From Bill Powers (2009.02.20.1946 MST)]

Rick Marken (2009.02.20.1740) --

[To Martin Taylor] ... I'll just say now that what I thought Bill was right
about was that you were talking about outputs not having a feedback
effect on presentations as though this implied an open loop
relationship between presentations (by which I assumed you meant
environmental events, S) and outputs (R); but outputs never affect
presentations (R doesn't have feedback effects on S in a closed loop);
they affect controlled variables, V, (which are disturbed by
presentations) as Bill said.

Yes, Martin emphasized that the action does not affect the
presentation. That loop is not closed, right? It could be closed if
the action did affect the presentation, which isn't impossible, but
it doesn't, so that potential loop is not closed -- which means it's
open, as we use the term. There IS an open-loop relationship between
S and R, isn't there? Or are you saying that you have to have a
closed-loop relationship and open it before you can call it
open-loop? Your rules for applying these terms are not at all clear.
And I think you will see below that open-loop can mean more than one thing.

All this nitpicking about what Martin does or doesn't understand is
irrelevant, unfair, and probably grossly inaccurate, not to mention
uninteresting. What is interesting is the phenomenon he is
describing, or hinting at. Neither he nor you has talked about it very much.

A person is made to decide at a certain time which of two lights has
just come on so as to pick a button to press, turning the light off.
If the decision-time is set too soon after the button is pressed, the
subject is wrong half of the time (I'm guessing about that but it
seems to fit the description). At some quite definite time after the
light has come on, a forced guess starts to be biased in the right
direction, and is correct more and more often as the delay increases
beyond the minimum time. Is that a correct description, Martin?

I have had a computer crash and consequently don't have Martin's data
plot handy, but as I recall it, the threshold delay was something
like 300 milliseconds after the onset of the light. This is about
twice as long as the transport lag we measure in a tracking
experiment, a hint, perhaps, that we are looking at the operation of
a higher-order system. But knowing nothing about the experimental
conditions or what it is like to do that task, I'll just say it's
some system that just begins to detect which light is on some time
before the 300 millisecond mark. I'm assuming that the warning beeps
effectively allow the person to synchronize the time of the press
with the third beep, and therefore that the identification takes
place some time shortly before the third beep. If the data were
recorded in the proper way, it should be possible to tell how good
this synchronization was, but there is no way to tell exactly when
the identification was made, because there will be a delay between
that moment and the time when the contacts under the button actually close.

Incidentally, during that delay time, the system is literally
open-loop: there is no change in the feedback for something near
three tenths of a second, even in the control loop that operates the button.

The rapid increase in the correct guesses right after the threshold
time follows a straight line which I suspect is the initial rise of
an exponential (-like) approach to asymptote; Martin said that for
long enough delays, identification becomes almost perfect. I am
imagining a process that produces a perceptual signal that rises
above the system noise level until it reaches some threshold of
detectability. When sampled early in the rise-time this signal is
small, showing up as a bias on the system noise that occasionally
reaches the detection threshold. This could happen in the detector
for either light, so the result is simply a slight excess of correct
responses over errors. For longer delays, this signal has had more
time to rise, so the bias is larger and the percentage of correct
guesses increases. At some point the false positives essentially die
out and only the correct detector responds enough to be perceived.
From the slope of the straight lines in the plot, I would guess that
the rise time is fairly fast, having a time constant in the tens of
milliseconds. Of course I don't know how the y-dimension is being
calculated and we have no raw data, so these estimates are pretty fuzzy.

This is indeed a psychophysical measurement, but of quite a different
kind from those involved in magnitude estimates. It suggests that
there is one level which contains detectors for the position of each
light, and a level above it which decides which perceptual signal is
larger than the other. The higher system is apparently adjusted to
look deep into the system noise for the first hint of a difference;
that would be an effect, I suppose, of stressing that the response
must be made at the exact time of the third beep when, some of the
time, the light-detectors have barely had a chance to register the
position of the light. We can suppose that the higher-level system
always picks the largest perceptual signal, but that the system noise
makes the signals fluctuate enough to give many false indications
when the judgment must be made with high resolution early in the
rise-time of the position signals. I can almost see how to make a
model of this process.
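
A rough Monte Carlo version of that model is easy to sketch. Everything
numerical below (time constant, asymptote, noise level) is invented for
illustration, not fitted to Martin's data: each light's detector carries
Gaussian system noise, the lit detector's signal rises exponentially, and
the higher system picks whichever signal is larger at the forced sampling
moment.

import math
import random

def percent_correct(delay_ms, tau_ms=40.0, asymptote=3.0, noise_sd=1.0, trials=20000):
    """Percent correct forced choices for a given sampling delay."""
    # Signal in the lit light's detector rises exponentially toward asymptote.
    rise = asymptote * (1.0 - math.exp(-max(delay_ms, 0.0) / tau_ms))
    # The higher system picks the larger of the two noisy detector signals.
    correct = sum(
        rise + random.gauss(0.0, noise_sd) > random.gauss(0.0, noise_sd)
        for _ in range(trials)
    )
    return 100.0 * correct / trials

for delay in (0, 20, 40, 80, 160, 320):
    print(f"{delay:3d} ms past threshold: {percent_correct(delay):5.1f}% correct")

At zero delay the comparator is guessing (50%); as the signal climbs out
of the noise the correct detector wins more and more often, which is the
general shape of the curve under discussion.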

Of course my approach to this problem makes no use of
information-theoretic concepts; I think in circuits and physical
systems that actually do things. I'm sure the circuits could be
analyzed in terms of information theory, but I wouldn't be interested
in that. I would probably become interested if given a peanut M&M
every time I said something nice about information theory. Plain M&Ms
have no power over me.

The setup for this experiment wouldn't occur very often in normal
life. What's interesting to me is that this experiment provides a
possible probe into the levels of organization, quite similar to
Marken's ingenious tasks that show different temporal characteristics
for perceptions of different levels of variables. The idea of forcing
an observation to take place at a specified time is a clever way of
sneaking in between the time at which an input is perceived and the
time at which something is done about it. Time rather than a wire
cutter is used to break the loop and isolate the input effect before
feedback can modify it. I think this is a legitimate technique.

Best,

Bill P.


[From Rick Marken (2009.02.21.0830)]

Bill Powers (2009.02.20.1946 MST)--

I probably won't be able to say much about this in detail until
tomorrow night -- very busy weekend -- but I think you make some great
points. I'll just make a few quick comments:

Yes, Martin emphasized that the action does not affect the presentation.
That loop is not closed, right?

That's right. In fact the loop is not closed with respect to _any_ of
the disturbances (beeps or light onset). So all "presentations" are
open loop with respect to button presses (output). For those actually
reading this stuff here's what Bill means:

    S ----> V ----> O ----> R
            ^               |
            |               |
            +---------------+

The path from S (presentation) to R (action) is open loop in a control
loop; the closed loop is with respect to the controlled variable, V.
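
A toy simulation (constants made up for illustration) shows the
distinction in action: S disturbs V, R feeds back on V, nothing returns
from R to S, and yet V is controlled.

import math

ref, k, gain, dt = 5.0, 1.0, 20.0, 0.01   # arbitrary illustrative constants
R = 0.0
for step in range(501):
    t = step * dt
    S = 3.0 * math.sin(t)        # presentation: a disturbance the output never touches
    V = S + k * R                # controlled variable: disturbance plus feedback from R
    R += gain * (ref - V) * dt   # output opposes error in V
    if step % 100 == 0:
        print(f"t={t:4.2f}  S={S:6.3f}  V={V:6.3f}  R={R:6.3f}")

V hovers near the reference while R ends up mirroring -S/k: closed loop
through V, open path from S to R.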

I think your analysis of the experiment in terms of the timing
revealing something about the levels of perceptual variable being
controlled is very clever. But your most astute observation is surely
this one:

The setup for this experiment wouldn't occur very often in normal life.
What's interesting to me is that this experiment provides a possible probe
into the levels of organization, quite similar to Marken's ingenious tasks
that show different temporal characteristics for perceptions of different
levels of variables.

Yes, they are ingenious, aren't they? They were never published
because Martin wrote a negative review of the paper in which those
experiments were described (way back in 1994 or so). Could some of my
behavior toward Martin be accounted for by the grudge I still hold
against him for this? Nahhhh;-)

Best

Rick
--
Richard S. Marken PhD
rsmarken@gmail.com

[Martin Taylor 2009.02.21.13.07]

[From Rick Marken (2009.02.21.0830)]
The setup for this experiment wouldn't occur very often in normal life.
What's interesting to me is that this experiment provides a possible probe
into the levels of organization, quite similar to Marken's ingenious tasks
that show different temporal characteristics for perceptions of different
levels of variables.
Yes, they are ingenious, aren't they? They were never published
because Martin wrote a negative review of the paper in which those
experiments were described (way back in 1994 or so). Could some of my
behavior toward Martin be accounted for by the grudge I still hold
against him for this? Nahhhh;-)

I was surprised by this comment, as I didn’t remember ever having been
asked by a journal to review a Marken paper. So I searched my archives
and did find a set of comments on the paper in question, though they
were not written for a journal. It was a review that Rick asked me to
do, and it was sent to him. I don’t know whether that was done
privately or over what was then called “CSG-L”. I can’t see how such
comments, written for and to the author, could have prevented
publication of the paper.
Anyway, reading my comments 17 1/2 years later, I cannot see how Rick
can characterize them as “negative”. They consist largely of comments
on specific elements, with suggestions as to how Rick could make his
points more convincingly, and avoid statements that might induce
ill-informed criticism from the intended audience. Here’s an example
(the first specific comment):
From a didactic point of view, you start off rather abruptly with
the statement of the control-theoretic position: “Control is the means
by which organisms keep perceived aspects of their external environment
in desired states”, and “From the actor’s perspective, control is a
perceptual phenomenon”. After some immersion in the CSG discussion,
these statements are self-evident. But I think they are not
self-evident to the intended audience, and I think the reader’s
initiation into your thinking might be eased if you were to split the
first paragraph at the quoted sentence, and devote one or two
paragraphs to examples of it. You might use the car driver controlling
the view of the road, and the hungry person controlling the perception
of food in the stomach. Then follow with the bit about the view of
control from the actor’s or the observer’s perspective, as a new
section.

In most of the comments, I said that from my knowledge of PCT at that
time, what Rick wrote seemed both correct and likely not to convince a
reader unfamiliar with PCT. I did tell him that because I felt that the
way he put things would not convince someone not already convinced, I
would not have recommended publication, if I had been asked to referee
the paper.

I couldn’t find a copy of Rick’s paper to see whether I would now make
the same comments as I did then. Probably some comments would change,
but I suspect not by very much. On balance, however, after reading my
comments at this remove, I am amazed at how little has changed over the
years.

Rereading my old comments did not reduce my surprise at Rick’s
statement above – it increased it.

Martin

[Martin Taylor 2009.02.21.13.49]

[From Bill Powers (2009.02.20.1946 MST)]

[To Rick Marken (2009.02.20.1740)] --

What is interesting is the phenomenon he is describing, or hinting at.
Neither he nor you has talked about it very much.

A person is made to decide at a certain time which of two lights has
just come on so as to pick a button to press, turning the light off. If
the decision-time is set too soon after the button is pressed, the
subject is wrong half of the time (I’m guessing about that but it seems
to fit the description). At some quite definite time after the light
has come on, a forced guess starts to be biased in the right direction,
and is correct more and more often as the delay increases beyond the
minimum time. Is that a correct description, Martin?

Yes. Since you lost the original curve, I am including it here.

[Attachment: Schouten.jpg – the d’^2 vs. response-delay curve]

I have had a computer crash and consequently don’t have
Martin’s data plot handy, but as I recall it, the threshold delay was
something like 300 milliseconds after the onset of the light. This is
about twice as long as the transport lag we measure in a tracking
experiment, a hint, perhaps, that we are looking at the operation of a
higher-order system.

My hypothesis is that the perception in question is at the category
level. In the diagram we developed, it is at the category level that
the “answer” is matched to the “presentation”. Apart from the transport
lag, which seems to be about 230 msec for subject B, variation in the
timing would then refer to the moment when the “intended answer”
reference signal is provided to the button-selection control loop.

But knowing nothing about the experimental conditions or
what it is like to do that task, I’ll just say it’s some system that
just begins to detect which light is on some time before the 300
millisecond mark. I’m assuming that the warning beeps effectively allow
the person to synchronize the time of the press with the third beep,
and therefore that the identification takes place some time shortly
before the third beep. If the data were recorded in the proper way, it
should be possible to tell how good this synchronization was, but there
is no way to tell exactly when the identification was made, because
there will be a delay between that moment and the time when the
contacts under the button actually close.

I don’t think this kind of error, which you correctly observe must
exist, can be very great, because the data points fall remarkably close
to the straight line for subject B, and after the early curve at the
bottom that is probably due to different transport lags for different
subject, the same is true for the combined curve. If there were much
variation in the button-push times gathered into the data for one
point, I think that would show up as quite visible left-right
deviations from the line.

Incidentally, during that delay time, the system is literally
open-loop: there is no change in the feedback for something near three
tenths of a second, even in the control loop that operates the button.

The rapid increase in the correct guesses right after the threshold
time follows a straight line which I suspect is the initial rise of an
exponential (-like) approach to asymptote; Martin said that for long
enough delays, identification becomes almost perfect.

Those are two different things. If the curve is indeed the initial rise
of an exponential-like approach to asymptote, that asymptote must be a
very long way beyond the range investigated in the data. I am with you
in suspecting that it is, but we have no evidence on the question. When
I say “identification becomes almost perfect”, again we can use
intuition to ask how likely is it that after long inspection a subject
would be unable to see which light was on, but there’s no data beyond
d’ a bit less than three (where the probability of a wrong response is
roughly 3 per thousand responses). “Perfection” would be zero errors in
an infinite number of responses, but 3/1000 is not bad, and 1/10,000 is
experimentally indistinguishable from perfection – at least until the
subject does make a mistake, at which point the experimenter would
probably guess the subject had blinked or hadn’t looked, rather than that
the subject had been unable to tell the difference between the lights.

I am imagining a process that produces a perceptual signal
that rises above the system noise level until it reaches some threshold
of detectability. When sampled early in the rise-time this signal is
small, showing up as a bias on the system noise that occasionally
reaches the detection threshold. This could happen in the detector for
either light, so the result is simply a slight excess of correct
responses over errors. For longer delays, this signal has had more time
to rise, so the bias is larger and the percentage of correct guesses
increases. At some point the false positives essentially die out and
only the correct detector responds enough to be perceived. From the
slope of the straight lines in the plot, I would guess that the rise
time is fairly fast, having a time constant in the tens of
milliseconds. Of course I don’t know how the y-dimension is being
calculated and we have no raw data, so these estimates are pretty
fuzzy.

The y dimension is d’^2. It is a one-argument function of the
probability of a correct response, usually looked up in tables, but you
can find it using the concepts I will explain when I get back to my
Bayesian notes.
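
For readers without the tables, that lookup is a few lines of Python. One
caution: I am assuming the convention that matches the numbers quoted in
this thread, p(correct) = Phi(d'), under which d' just below 3 corresponds
to roughly 3 errors per thousand; some treatments of two-alternative tasks
use p(correct) = Phi(d'/sqrt(2)) instead, so treat this as a sketch of the
idea, not as Schouten's exact definition.

from statistics import NormalDist   # standard library, Python 3.8+

def d_prime_squared(p_correct):
    """Recover d'^2 from proportion correct via the inverse normal CDF."""
    return NormalDist().inv_cdf(p_correct) ** 2

for p in (0.75, 0.90, 0.997):
    print(f"p(correct) = {p:5.3f}  ->  d'^2 = {d_prime_squared(p):5.2f}")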

This is indeed a psychophysical measurement, but of quite a different
kind from those involved in magnitude estimates. It suggests that there
is one level which contains detectors for the position of each light,
and a level above it which decides which perceptual signal is larger
than the other.

I’d call that a category-level perception, but I’m quite happy to defer
if you think it is something else.

The higher system is apparently adjusted to look deep into
the system noise for the first hint of a difference; that would be an
effect, I suppose, of stressing that the response must be made at the
exact time of the third beep when, some of the time, the
light-detectors have barely had a chance to register the position of
the light. We can suppose that the higher-level system always picks the
largest perceptual signal, but that the system noise makes the signals
fluctuate enough to give many false indications when the judgment must
be made with high resolution early in the rise-time of the position
signals. I can almost see how to make a model of this process.

Yes, that’s the description usually used when modelling the process.
The basic idea is what you explain above: the larger signal is chosen,
but the system noise generates distributions of the perceived signal
size as a function of the presented signal size, so sometimes the
larger perceived signal does not correspond to the correct answer.

Of course my approach to this problem makes no use of
information-theoretic concepts; I think in circuits and physical
systems that actually do things. I’m sure the circuits could be
analyzed in terms of information theory, but I wouldn’t be interested
in that.

We all have different interests, and a different bag of tools to help
us to control perceptions related to those interests. I’d like to be
able to discuss with you the information-theoretic aspects of control,
because I think that useful insights might emerge from those
discussions. But there are other participants on CSGnet, and others not
on CSGnet who might be interested, and personally I do think it’s one
tool that can help in understanding perceptual control.

Actually, I think you do use information-theoretic concepts, but you do
it in the same way that a non-engineer makes a bridge by placing a
plank across a gap, whereas the engineer calculates the stresses
numerically, and crosses bigger gaps.

The setup for this experiment wouldn’t occur very often in
normal life.

True, but it’s not unknown that you have to make a snap decision on
action without having time to gather all the evidence. In the defence
context in which I spent all my working life, that’s a basic fact of
life. Granted, the military decisions are at a rather higher level of
perception, but the principle is the same thing – control in
imagination for as long as you can afford to, and then act on the best
imagined match between what the evidence suggests and what you want to
achieve. In more everyday life, deadlines do occur, and sometimes the
problem in meeting them is that you haven’t yet come up with as much
relevant data as you would like.

Apart from this, the setup in my canonical diagram would also occur
pretty often in ordinary life. It’s called “answering a question”. I’ll
repost that here, too, but with appropriate modifications to take it
out of the realm of experiments. There are lots of control loops and
transformations omitted, including one through the questioner that
ensures that the answer is understood, and another through the
questioner at a higher level, to make sure the responder understands
the question. But it should give the general idea.

[Attachment: question-answer.jpg – the canonical question-answering control diagram]

What’s interesting to me is that this experiment provides
a possible probe into the levels of organization, quite similar to
Marken’s ingenious tasks that show different temporal characteristics
for perceptions of different levels of variables.

That’s a wonderful idea. But do you think it could work below the
category level?

The idea of forcing an observation to take place at a
specified time is a clever way of sneaking in between the time at
which an input is perceived and the time at which something is done
about it. Time rather than a wire cutter is used to break the loop and
isolate the input effect before feedback can modify it. I think this is
a legitimate technique.

Good analogy.

Martin

[From Bill Powers (2009.02.21.1617 MST)]

Martin Taylor 2009.02.21.13.49–

Yes. Since you lost the original
curve, I am including it here.

[Attachment: 1b8d65e.jpg – the Schouten curve, reposted]
It looks as if 200 to 230 milliseconds is the threshold. In tracking
experiments with continuous disturbances, we measure 7 to 9 frames of
delay, or 117 to 150 milliseconds, and with your new step disturbances I
am getting a consistent 15 frames or 250 milliseconds – very consistent
with the figure above.
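
(A quick check of the arithmetic: those frame counts convert to
milliseconds on the assumption of a 60 Hz display, which I take to be the
frame rate in use.)

FRAME_MS = 1000.0 / 60.0              # assuming a 60 Hz display, ~16.7 ms per frame
for frames in (7, 9, 15):
    print(f"{frames:2d} frames = {frames * FRAME_MS:5.1f} ms")   # 116.7, 150.0, 250.0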

My hypothesis is that the
perception in question is at the category level. In the diagram we
developed, it is at the category level that the “answer” is
matched to the “presentation”. Apart from the transport lag,
which seems to be about 230 msec for subject B, variation in the timing
would then refer to the moment when the “intended answer”
reference signal is provided to the button-selection control
loop.

I have been leaning toward the same conclusion, though I don’t have any
justification for thinking that just making the disturbance discontinuous
is enough to require category-level control. Possibly
“category” is the wrong term – the important aspect may be the
introduction of discrete variables or symbolic control – that is,
control through use of tokens or symbols rather than continuous variables.
We have to do an awful lot of guessing here, which makes me uncomfortable.

I don’t think this kind of
error, which you correctly observe must exist, can be very great, because
the data points fall remarkably close to the straight line for subject B,
and after the early curve at the bottom that is probably due to different
transport lags for different subjects, the same is true for the combined
curve.

Actually, the sum of a series of straight-line plots is still a
straight-line plot, isn’t it? Anyway, if the time between identification
of the light and the final contact closure is constant, the straight line
adjusted for movement time would simply be translated sideways on the
plot. I was imagining doing this experiment, and it seemed to me that
between the first beep and the third, the subject has to do quite a lot
of fast work. First, the initial beep says that you need to sample the
state of the light in the next 500 milliseconds and leave enough time to
move your hand in the right direction from where it is hovering to touch
the button, then increase the pressure so the button is depressed just as
the third beep occurs. You have half a second from the first beep to do
all that; the second beep may actually have to be the signal to move the
hand – it could take close to a quarter of a second to make the movement
and apply enough force to depress the button. I think it would be a very
tight squeeze to judge which light is on, then move the hand in the
correct direction and press the button, all in only half a second. And
remember, with a threshold time of 200 to 250 milliseconds after onset,
the first beep has to occur well before a light turns on. It’s possible
that the actual perceptual judgment is made more like 150 milliseconds
after onset, or even sooner.

I think a series of reaction-time experiments is needed here to determine
how long after light-onset the actual perceptual sample is taken and what
the lag time is between the first reaction to the light and the press of
the button. I doubt that that lag is much different from 150
milliseconds.

If there were much variation in
the button-push times gathered into the data for one point, I think that
would show up as quite visible left-right deviations from the
line.

As I said above, however, if the button-pushing move takes a fairly
constant amount of time, the shape of the curve would not be
changed.

If the curve is indeed the
initial rise of an exponential-like approach to asymptote, that asymptote
must be a very long way beyond the range investigated in the
data.

I don’t think so. Remember that the quantity actually plotted is not the
perceptual signal, but a very compressed function of its magnitude
relative to the noise level. With d’^2 = 9, you say the probability of an
error is 3 in 1000, which is pretty close to asymptote on a linear scale
and is just above the highest recorded point at d’^2 = 8. If you converted
the y axis to probability of error (minus 0.5, I assume), I think you
would see the 100%-right line being approached pretty fast, and the
signal itself could curve quite sharply. I suppose we need to compute
those numbers – you know what the functions are, why not just try it and
see? The probability of a correct indication would be a very steep
function of the ratio of signal amplitude to the standard deviation of
noise. Here’s that page from my old Chemical Rubber Handbook:

[Attachment: a normal-distribution table page from the Chemical Rubber Handbook]

This gives the idea: as the signal becomes larger relative to the noise,
the probability of a chance fluctuation being that large drops by a huge
amount with every standard deviation.
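
Taking up the "why not just try it and see" suggestion, and keeping the
same assumed convention as before (p(error) = 1 - Phi(d')), the conversion
is a one-liner and shows how steeply the error rate collapses across the
plotted range:

from statistics import NormalDist

phi = NormalDist().cdf
for d_squared in range(1, 10):
    d_prime = d_squared ** 0.5
    print(f"d'^2 = {d_squared}:  p(error) = {1.0 - phi(d_prime):.4f}")

On a linear probability scale the curve is pressed against the 100%-right
line well before d'^2 = 9, which is the compression of the y axis at issue.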

Best,

Bill P.