Feedback reciprocity (was Illusion and Loops (was Beyond the Fringe))

[David Goldstein (2010.07.05.0810)]

Some thoughts I have about the discussion as I was listening in:

(1) Why is the emphasis on research methods important? Doesn't PCT teach that people use any and all means available to control their perceptions?

(2) I saw how Bill Powers approached the topic of Synesthesia in an interesting case study which I presented at the last CSG meeting in Cherry Hill, NJ. The research method was designed to reveal as much as possible what the person was experiencing. I learned that the person had normal color vision as shown by a color matching task that did not involve verbal abilities. I learned how the person perceived each number, which appeared black to most people, but appeared colored to her. The key to the method was to ask the right questions and to design the study so it showed what the person was experiencing. In the discussion that took place at the meeting, the questioning that Bill did was very much like he would do in MOL Therapy.

(3) I understand that "the behavioral illusion" plays an important role in why Rick emphasizes "closed-loop" methods. But the way Bill approached the topic of Synesthesia did not seem strange or different from "conventional" approaches, except for the emphasis on setting things up so that the experimenter could experience what was going on in the subject as much as possible.

(4) With respect to the topic of magnitude estimation, I remember that many years ago, my younger brother was a psychology major before he went to medical school. He did a project which compared two methods of magnitude estimation. One involved verbal estimation. The other involved matching and adjustment, much closer to the method that Bill used with the Synesthesia case. The results were not the same.

···

----- Original Message ----- From: "Martin Taylor" <mmt-csg@MMTAYLOR.NET>
To: <CSGNET@LISTSERV.ILLINOIS.EDU>
Sent: Sunday, July 04, 2010 11:25 PM
Subject: Re: Feedback reciprocity (was Illusion and Loops (was Beyond the Fringe))

[Martin Taylor 2010.07.04.17.02]

[From Rick Marken (2010.07.04.0840)]

Martin Taylor (2010.07.04.10.15)--
As a passing comment, very little of psychophysics is concerned with
magnitude estimation. Most psychophysics has to do with the capabilities of
the processing channels, such as timing effects, noise limits,
cross-interference, cross-support, and the like.

Yes, but since all psychophysical research is based on an open-loop
causal model of the systems under study, those who study psychophysics
(which used to include me;-) can't really be learning much about the
"capabilities of the processing channels" of these systems using
conventional methods. Agreed?

No. Far from it. Fifteen or twenty years ago, when I was just getting my head into the PCT space, I might have agreed, but now that I understand PCT in a pretty deep sense, I don't. In fact, the more I think about it from a PCT perspective, the less I agree with your statement.

Martin

[From Rick Marken (2010.07.04.0750)]

Martin Taylor (2010.07.04.17.02)--

Rick Marken (2010.07.04.0840)--

Yes, but since all psychophysical research is based on an open-loop
causal model of the systems under study, those who study psychophysics
(which used to include me;-) can't really be learning much about the
"capabilities of the processing channels" of these systems using
conventional methods. Agreed?

No. Far from it. Fifteen or twenty years ago, when I was just getting my
head into the PCT space, I might have agreed, but now that I understand PCT
in a pretty deep sense, I don't. In fact, the more I think about it from a
PCT perspective, the less I agree with your statement.

I'd really like to hear why you have come to this conclusion, which is
so different from mine.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Rick Marken (2010.07.05.0840)]

David Goldstein (2010.07.05.0810)--

Some thoughts I have about the discussion as I was listening in:

(1) Why is the emphasis on research methods important? Doesn't PCT teach
that people use any and all means available to control their perceptions?

It's all my doing. I came to PCT as an experimental psychologist and
what was most interesting to me about PCT was its implications for
psychological research. Indeed, the article that really hooked me was
Bill's 1978 Psych Review article, which basically said that if
organisms are closed-loop systems then scientific psychology will have
to start over and be rebuilt from the ground up based on a closed-loop
model of behavior.

That seemed like kind of a huge deal to me and in a way everything
I've done with PCT from that time has been aimed at testing that claim
or demonstrating that that claim is true. Of course, this has turned
out to be a rather poor career move. Conventional research
psychologists don't like my work because of its implications for their
careers. And it turns out that research psychologists who get
interested in PCT don't like it either for various reasons of their
own. So basically I have managed to get interested in an aspect of PCT
that nobody wants to hear about. So I guess I'll have to keep my day
job;-)

(2) I saw how Bill Powers approached the topic of Synesthesia in an
interesting case study which I presented at the last CSG meeting in Cherry
Hill, NJ. The research method was designed to reveal as much as possible
what the person was experiencing...The key to the method was
to ask the right questions and to design the study so it showed what the
person was experiencing. In the discussion that took place at the meeting,
the questioning that Bill did was very much like he would do in MOL Therapy.

I think MOL-like methods should be a big part of how research is done
in the future.

(3) I understand that "the behavioral illusion" plays an important role in
why Rick emphasizes "closed-loop" methods. But the way Bill approached the
topic of Synesthesia did not seem strange or different from "conventional"
approaches, except for the emphasis on setting things up so that the
experimenter could experience what was going on in the subject as much as
possible.

PCT-based research methods are not necessarily very different from
conventional methods. The main difference is simply recognizing that
the subject is carrying out some purposes (controlling some
perceptions), that the IV in the experiment (the variable manipulated
by the experimenter) may be a disturbance to these purposes and that
you can tell what purposes are being carried out by seeing whether the
purpose (controlled variable, which can now be thought of as the DV)
is protected from these disturbances.
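A toy sketch of that test (assumed gain and a made-up drifting
disturbance, not a model of any particular experiment): if the subject
controls the variable, the IV barely moves it, while the output mirrors
the IV almost perfectly.

import numpy as np

rng = np.random.default_rng(2)
d = np.cumsum(rng.standard_normal(2000)) * 0.05    # drifting disturbance (IV)
out, gain, dt = 0.0, 100.0, 0.01
cvs, outs = [], []
for dk in d:
    cv = out + dk                  # candidate controlled variable
    out += gain * (0.0 - cv) * dt  # output opposes deviation from reference 0
    cvs.append(cv)
    outs.append(out)
print("corr(IV, CV)     =", round(float(np.corrcoef(d, cvs)[0, 1]), 2))
print("corr(IV, output) =", round(float(np.corrcoef(d, outs)[0, 1]), 2))

The near-zero first correlation is what "protected from disturbance"
looks like in data; the near-perfect negative second correlation is the
behavioral illusion in miniature.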

PCT methods are aimed at understanding purpose rather than
cause-effect and an important aspect of studying purpose is to try to
see things from the subject's perspective. That may be why it looked
to you like Bill was "setting things up so that [he] could experience
what was going on in the subject as much as possible". He was. In PCT
research, the goal is to understand behavior from the subject's point
of view since behavior is organisms controlling their own perceptions.

(4) With respect to the topic of magnitude estimation, I remember that many
years ago, my younger brother was a psychology major before he went to
medical school. He did a project which compared two methods of magnitude
estimation. One involved verbal estimation. The other involved matching and
adjustment, much closer to the method that Bill used with the Synesthesia
case. The results were not the same.

My only interest in magnitude estimation is as a _possible_
illustration of the behavioral illusion. This is because magnitude
estimation is one of those rare cases in psychology where the issue is
not just _whether_ an IV has an effect on a DV but _how_ it affects
it. The Power Law says that the effect of the IV on the DV is a power
function: DV = k*IV^p. The Weber-Fechner Law says that the effect of
IV on DV is a log function: DV = k log (IV). The behavioral illusion
is that the observed relationship between IV and DV for a closed-loop
system is the inverse of the feedback function relating DV to CV (the
controlled perceptual variable).
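Here is a minimal simulation of the illusion (a sketch; the square-root
feedback function, the gain, and the reference value are arbitrary
choices for illustration, not fitted to anything):

import math

def g(dv):                  # assumed feedback function: square root
    return math.sqrt(max(dv, 0.0))

def run_loop(iv, r=10.0, gain=200.0, dt=0.01, steps=2000):
    dv = 0.0
    for _ in range(steps):
        cv = g(dv) + iv              # CV = feedback effect + disturbance (IV)
        dv += gain * (r - cv) * dt   # integrating output keeps CV near r
    return dv

for iv in [0.0, 2.0, 4.0, 6.0, 8.0]:
    dv = run_loop(iv)
    # the observed IV-DV relation traces the inverse of g: dv = (r - iv)^2
    print(f"IV={iv:4.1f}  observed DV={dv:8.2f}  inverse of g at (r - IV)={(10.0 - iv) ** 2:8.2f}")

The apparent "law" relating IV to DV is a square function, the inverse
of the square-root feedback function, and says nothing about the
organism.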

My paper (http://www.mindreadings.com/BehavioralIllusion.pdf) aims to
show that the power function relationship between IV and DV observed
in magnitude estimation experiments _could be_ a behavioral illusion
if the feedback function relating DV (the magnitude estimates) to CV
(relationship between perceived magnitude estimates and perceived
value of the IV) is logarithmic.
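A toy version of the model in the paper, as I read it (the constants
are made up): if the perceived magnitude of the stimulus is
k_stim*log(i), the perceived size of the reported number N is
k_num*log(N), and the subject adjusts N until the two perceptions
match, the emitted numbers trace a power function of i:

import math

k_stim, k_num = 2.0, 1.0   # assumed sensitivities of the two log functions

for i in [1, 2, 5, 10, 20, 50, 100]:
    # control equilibrium: k_num*log(N) = k_stim*log(i) => N = i**(k_stim/k_num)
    n = math.exp(k_stim * math.log(i) / k_num)
    print(f"i={i:3d}  reported N={n:8.1f}")

Both input functions are logarithmic, yet the reported numbers follow a
power law with exponent k_stim/k_num.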

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2010.07.05.0840 MDT)]

David Goldstein (2010.07.05.0810)--

Thanks for the clarifications, David; I think you capture very well the approach I try to take. As you noticed, my idea of MOL therapy is really quite simple: instead of guessing, just keep trying to find out what the person is experiencing. That's my approach to understanding human behavior in general -- I don't know any other way to do it.

DMG: (1) Why is the emphasis on research methods important? Doesn't PCT teach that people use any and all means available to control their perceptions?

(2) I saw how Bill Powers approached the topic of Synesthesia in an interesting case study which I presented at the last CSG meeting in Cherry Hill, NJ. The research method was designed to reveal as much as possible what the person was experiencing. I learned that the person had normal color vision as shown by a color matching task that did not involve verbal abilities. I learned how the person perceived each number, which appeared black to most people, but appeared colored to her. The key to the method was to ask the right questions and to design the study so it showed what the person was experiencing. In the discussion that took place at the meeting, the questioning that Bill did was very much like he would do in MOL Therapy.

(3) ... the way Bill approached the topic of Synesthesia did not seem strange or different from "conventional" approaches except for the emphasis on setting things up so that the experimenter could experience what was going on in the subject as much as possible.

BP: That's a pretty big difference, isn't it? As much as possible, I try to arrange things so I don't have to depend on knowing what the subject means by his (her) words. The synesthesia experiment was set up so the subject could simply adjust a color on the screen; we were both starting with the same intensity-level inputs, rather than depending on words like "blue" or "green with a little blue in it." Even in MOL I try to minimize the dependency on words: if the subject says "I was angry" I don't assume I know what "angry" means; I ask more questions about it, trying to find out what sort of experience this is for the client (not that I have "clients"). I think psychologists and most other scientists or practitioners depend far too much on important words that have never actually been linked to specific concepts or experiences. We have to omit detailed checking of most of the words we use, but when it comes to critical terms that have important implications, like "anxiety" or "depression," psychologists just seem to assume that everyone knows what those words mean. Well, they don't. If you've never experienced the state referred to loosely as depression, or explored it with a depressed person until you understand all that goes with it, you haven't a clue. And even if you have been in a state you would call depression, you can't know what another person means by using the same word without doing this exploration. The idea of diagnosing a person's problem by using standardized methods of categorization is simply ludicrous. People custom-tailor their problems, they don't have standardized problems. The only reason they use standardized words to describe their problems is that there aren't enough different words.

I should think that the worst therapists would be the ones who start out thinking they know what's wrong with you on the basis of a standardized test, or just because they think they know what your words mean. Maybe a really good therapist would be one with Asperger's Syndrome, someone who doesn't understand any of the weird things people do like shaking hands or saying please. Maybe they're closer to the truth than people who take such things for granted.

DMG: (4) With respect to the topic of magnitude estimation, I remember that many years ago, my younger brother was a psychology major before he went to medical school. He did a project which compared two methods of magnitude estimation. One involved verbal estimation. The other involved matching and adjustment, much closer to the method that Bill used with the Synesthesia case. The results were not the same.

BP: Hey, your brother ought to learn about PCT! Any chance of seeing more about that project? Could his paper be scanned and OCR'd for us? I assume it's not already in electronic form.

The data from psychological experiments are really, in most cases I have seen, like a Rorschach test, scattered in loose random patterns to which you can fit almost any theory and get p < 0.05. Matching and adjustment tests are no exception when you average data across subjects, but within-subject measurements are very accurate and repeatable. It's hard to find any theory but PCT that can explain how a person can carry out a matching task. Which is sort of ironic.

One of the big delusions in psychology is the Myth of the Sprague-Dawley Rat. Maintaining a pure strain of this rat has been deemed extraordinarily important, obviously because it's assumed that all "pure" Sprague-Dawley rats are alike. It follows from that assumption that if your data have a lot of random scatter in them, the scatter must be due to inherent variability in the rat and uncertainty of measurements. Not, heaven forbid, in failure of the hypothesis! Not, perish the thought, in orderly but spontaneous behavior originated by the rat independently of the environment! And most certainly and emphatically not because the rat is acting on its environment to control the experiences it is capable of having.

Don't blame me for the meanderings. It was your post that stimulated them. Not my fault (one of the really convenient ways we can use S-R theory).

Best,

Bill P.

[David Goldstein (2010.07.05.12.48 EDT)]

Bill, Rick, Martin and any others interested. I am attaching an email of the
research I mentioned.

In rereading it, I see that it was not a within-subject design, which is
what a PCT researcher would use.

I think it addresses the issue of whether conventional research could
distinguish perceptual effects from nonperceptual effects. It seems
to.

The physical matching method of responding comes closer to a PCT approach.

David

Jerry Undergraduate Research.PDF (315 KB)

[From Bill Powers (2010.07.05.1007 MDT)]

Rick Marken (2010.07.05.0840) --

RM: The Power Law says that the effect of the IV on the DV is a power
function: DV = k*IV^p. The Weber-Fechner Law says that the effect of
IV on DV is a log function: DV = k log (IV).

BP: When you put them together like that, it's easy to see the difference between Stevens' power law and Weber-Fechner:

Weber-Fechner: DV = k1 log(IV)

Stevens: log(DV) = k2 log(IV)

I really wonder if the data are good enough to tell the difference.
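One way to get a feel for the question (a sketch with synthetic data;
the scale constants and the 20% scatter are assumptions, not
measurements): generate responses from one law, add scatter, and see
how well each form fits.

import numpy as np

rng = np.random.default_rng(0)
iv = np.logspace(0, 3, 30)                            # intensities 1..1000
dv = 10.0 * np.log(iv)                                # generate from Weber-Fechner
dv = dv * (1 + 0.2 * rng.standard_normal(iv.size))    # add 20% scatter

# fit Weber-Fechner: DV = k1 log(IV)
k1 = np.sum(dv * np.log(iv)) / np.sum(np.log(iv) ** 2)
sse_log = np.sum((dv - k1 * np.log(iv)) ** 2)

# fit Stevens: log(DV) = log(k) + p log(IV), a straight line on log-log axes
ok = dv > 0
p, logk = np.polyfit(np.log(iv[ok]), np.log(dv[ok]), 1)
sse_pow = np.sum((dv[ok] - np.exp(logk) * iv[ok] ** p) ** 2)

print(f"log-law fit SSE = {sse_log:.1f}   power-law fit SSE = {sse_pow:.1f}")

Comparing the two error sums across noise levels shows how much scatter
the data can tolerate before the two laws become indistinguishable.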

My paper (http://www.mindreadings.com/BehavioralIllusion.pdf) aims to
show that the power function relationship between IV and DV observed
in magnitude estimation experiments _could be_ a behavioral illusion
if the feedback function relating DV (the magnitude estimates) to CV
(relationship between perceived magnitude estimates and perceived
value of the IV) is logarithmic.

The "could be" is the issue -- no need to prove it is. If it could be, then anyone investigating perception has to rule it out before ignoring it. This is generally the case for PCT interpretations as opposed to conventional ones.

Unfortunately (what is turning into) the other side is not going to do the work for us. The only way we will ever establish that the illusion is a real possibility is to do the experiment ourselves and show that indeed the behavioral illusion does exist, by altering the feedback function and showing that the result does, indeed, change accordingly. And it has to be a clean, by-the-book replication (including the original duplicating the original results as well as the new form of the feedback function giving different results, and the right different results).

We are, after all, dealing with scientists even if they are also human in wanting to protect their careers. We have to satisfy the scientist, not the fearful defender behind the crumbling fortifications. We may have to be diplomatic and make excuses for them and avoid gloating in triumph over them and saying I Told You So, but the time is coming soon when we also need to say "Look, you'd better get on this bus because the next one isn't due for a long time and you really don't want to end up standing in the middle of nowhere by yourself." That requires being in a very strong position, and that position can be reached most directly by showing explicitly that our theoretical conclusions do in fact apply to the work others have been doing, see Psych Rev, vol xxx, number yyy, page zzz.

As I said before, your approach of using familiar experimental results to make your case might work and I hope it does, but as I'm saying here there is only one way likely to give your work the effect you want. Do the work yourself or design the experimental procedures and persuade someone with the resources to do it. That's what will make the difference between tilting at windmills and operating them yourself.

Best,

Bill P.

[From Rick Marken (2010.07.05.1445)]

Bill Powers (2010.07.05.1007 MDT)

Rick Marken (2010.07.05.0840) --

RM: The Power Law says that the effect of the IV on the DV is a power
function: DV = k*IV^p. The Weber-Fechner Law says that the effect of
IV on DV is a log function: DV = k log (IV).

BP: When you put them together like that, it's easy to see the difference
between Stevens' power law and Weber-Fechner:...

I really wonder if the data are good enough to tell the difference.

I think so, especially when the power is >1, which it looks like it is
for electric shock.

My paper (http://www.mindreadings.com/BehavioralIllusion.pdf) aims to
show that the power function relationship between IV and DV observed
in magnitude estimation experiments _could be_ a behavioral illusion
if the feedback function relating DV (the magnitude estimates) to CV
(relationship between perceived magnitude estimates and perceived
value of the IV) is logarithmic.

The "could be" is the issue -- no need to prove it is. If it could be, then
anyone investigating perception has to rule it out before ignoring it. This
is generally the case for PCT interpretations as opposed to conventional
ones.

Pretty much. Though the main reason for the "could be" is to not give
the impression that we know that it is for sure a behavioral illusion.
The point was to illustrate, using a familiar phenomenon, what the
behavioral illusion _is_.

Unfortunately (what is turning into) the other side is not going to do the
work for us. The only way we will ever establish that the illusion is a real
possibility is to do the experiment ourselves and show that indeed the
behavioral illusion does exist, by altering the feedback function and
showing that the result does, indeed, change accordingly. And it has to be a
clean, by-the-book replication (including the original duplicating the
original results as well as the new form of the feedback function giving
different results, and the right different results).

I can't think of any way to do this in a magnitude estimation task
except by changing the part of the feedback loop to which we have
access -- the connection from muscle output to numerical response. If
we replicated the experiment with this manipulation I think the
audience would not be any more impressed than they were by your
beautiful demonstration of the behavioral illusion in the Psych Review
paper.

We are, after all, dealing with scientists even if they are also human in
wanting to protect their careers. We have to satisfy the scientist, not the
fearful defender behind the crumbling fortifications. We may have to be
diplomatic and make excuses for them and avoid gloating in triumph over them
and saying I Told You So, but the time is coming soon when we also need to
say "Look, you'd better get on this bus because the next one isn't due for a
long time and you really don't want to end up standing in the middle of
nowhere by yourself." That requires being in a very strong position, and
that position can be reached most directly by showing explicitly that our
theoretical conclusions do in fact apply to the work others have been doing,
see Psych Rev, vol xxx, number yyy, page zzz.

I can't see how to do that with the magnitude estimation task but I
think I'm working my way toward such a demo using the reaction time
task.

As I said before, your approach of using familiar experimental results to
make your case might work and I hope it does, but as I'm saying here there
is only one way likely to give your work the effect you want. Do the work
yourself or design the experimental procedures and persuade someone with the
resources to do it.

I'm doing that with you now; you're the one with the resources (the
TrackAnalyze program mainly) to analyze the reaction time version of
the tracking task.

That's what will make the difference between tilting at windmills and operating
them yourself.

I haven't felt like I've been tilting at windmills. I feel like I've
been operating a windmill repair stand in a town of broken windmills
where people are getting paid to keep their windmills in disrepair. So
every now and then I take on the job of ensuring that someone's
windmill stays broken (I am a statistical consultant after all;-).

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2010.07.06.0030 MDT)]

[From Rick Marken (2010.07.05.1445)]

Bill Powers (2010.07.05.1007 MDT)

RM: The Power Law says that the effect of the IV on the DV is a power
function: DV = k*IV^p. The Weber-Fechner Law says that the effect of
IV on DV is a log function: DV = k log (IV).

BP: When you put them together like that, it’s easy to see the
difference between Stevens’ power law and Weber-Fechner:…

I really wonder if the data are good enough to tell the difference.

RM: I think so, especially when the power is >1, which it looks like
it is for electric shock.

BP: I’m not so sure. Look at these data from a 1924 article by Selig
Hecht (all I could quickly find):

[graph attached: Hecht's intensity-discrimination data, dI/I plotted against log intensity]

No error bars are given but it’s quite clear that dI/I is certainly not
constant over the whole range, or even any small part of it except in the
right-hand part where it varies “only” something like 200% to
400%. Between 10 and 100 millilamberts it seems fairly constant - 1/9 of
the total logarithmic range or one billionth of the linear range of the
chart. I’ve seen similar curves for sound intensity. It’s pretty clear
that a power law wouldn’t fit this curve, either. Who is kidding whom?
" … it’s what you believe that ain’t so."

Best,

Bill P.

[From Bill Powers (2010.07.06.0055 MDT)]

David Goldstein (2010.07.05.12.48 EDT)

Bill, Rick, Martin and any others interested. I am attaching an email of the research I mentioned.

David, how about summarizing and analyzing the results for us (me) -- I'm rushed in getting ready for the Manchester meeting.

Bill

···

In rereading it, I see that it was not a within-subject design, which is what a PCT researcher would use.

I think it addresses the issue of whether conventional research could distinguish perceptual effects from nonperceptual effects. It seems to.

The physical matching method of responding comes closer to a PCT approach.

David

About: [Bill Powers (2010.07.06.0055 MDT)]

Bill: David, how about summarizing and analyzing the results for us (me) -- I'm rushed in getting ready for the Manchester meeting.

David: Here is a summary.

A subject was asked to judge the length of a line. In one condition, a subject would call out the length of the line in inches to the nearest one-quarter of an inch. In another condition, a subject would move two markers so that the distance between them was the same as the length of the line. In one condition, the target line was the shortest one. In another condition it was the longest one. The "context effect" was the difference in judgement of the target line length made by its being the shortest or longest line in the set. A context effect was obtained for all conditions involving verbal estimates, but not for all conditions involving matching. How a person makes the judgement made a difference in the judged length. The context effect is not a pure perceptual experience.

···

From: [David Goldstein (2010.07.06.04:50 EDT)]

[From Bill Powers (2010.07.06.0851 MDT)]

David Goldstein (2010.07.06.04:50 EDT) --

DMG: A subject was asked to judge the length of a line. In one condition, a subject would call out the length of the line in inches to the nearest one-quarter of an inch. In another condition, a subject would move two markers so that the distance between them was the same as the length of the line. In one condition, the target line was the shortest one. In another condition it was the longest one. The "context effect" was the difference in judgement of the target line length made by its being the shortest or longest line in the set. A context effect was obtained for all conditions involving verbal estimates, but not for all conditions involving matching. How a person makes the judgement made a difference in the judged length. The context effect is not a pure perceptual experience.

I don't understand. What is the "target line"? Are there two lines? You say "shortest in the set." In what set? Was there a whole bunch of lines? I guess I just have to read the paper.

OK, I've read most of it. One by one, each line in a set of seven horizontal lines with different lengths was presented briefly (for 0.2 second or 1 second) on a projection screen. A subject estimated each length verbally, in inches and quarter inches, in one set of trials, or set the visual distance between two sliding markers to be the same as the length, in another set of trials. Two different sets of lines were used, a long set and a short set, with one medium-length line (called the "trace" line) being of the same length in both sets. The trace line was the longest line in the short set, and the shortest line in the long set. It was always the last one projected, while the other 6 lines were presented in random order.

The judgements of the length of the trace line were used to measure a "context" effect: the trace was found to be estimated as shorter when it appeared in the long set than when it appeared in the short set, but only for the verbal estimates.

Eighty subjects were assigned to 8 groups in a 2 x 2 x 2 factorial design.

This enormously complex experiment, with several unnoticed experimental variables that were not investigated, produced practically no information. "Main effects due to mode of response and stimulus duration were not reliable. Only the Context by Mode of Response interaction reached significance." Nothing was said about the presentation of the trace stimulus as the last element in every series, or the fact that the judgement always had to be made about a stimulus that was no longer present, or that in the length-comparison judgement, one stimulus was present but had to be compared with a different one that was being remembered.

The difference between the two methods of judging length was striking in several dimensions. Here are the results for the longer stimulus presentation, 1 second, showing the mean minus standard deviation, mean, and mean plus standard deviation in each case. The actual length of the trace line was 216 mm (8-1/2 inches).

                      mean-SD     mean   mean+SD   (mm)
VERBAL    Short set     196.2    206.5    216.8
          Long set      137.1    148.3    159.5

MATCHING  Short set     198.1    204.1    210.3
          Long set      199.1    205.0    210.9

Subjects were asked to give their verbal estimates to the nearest quarter-inch, or 6.3 mm, which is just about the size of the standard deviation of the matching data. One wonders if all the data were recorded to the nearest quarter inch and then converted to the nearest tenth of a millimeter (!).

The matching measures are so consistent that one has to wonder why subjects consistently underestimated the length of the trace line by 11 to 12 millimeters on the average, which is 2 standard deviations. A hint might be obtained from the fact that the movable markers used to indicate length in the matching case were 6.4 mm wide. If most subjects used the outer edges of the two markers to indicate length, while the scale (not visible to the subject) was based on the inner edges of the markers, the measurements would be 12.8 millimeters low, just about the observed underestimation. But who knows? There's no report of asking the subjects how they were making the comparisons. Subjects were instructed to use the inner edges, but did they?

On the other hand, the verbal estimates are also consistently low, and in both kinds of estimates the estimates are lower when the stimulus is presented more briefly. There was no measurement of the elapsed time between the presentation and the time the estimate was given, so the rate of decay of the remembered line length is unknown. The presentations were always given at 15-second intervals; no investigation was made of the effect of shortening or lengthening the intervals, or of varying them at random.

For an undergraduate study this is probably the best your brother could have done, given the orientation of his teachers. But it is a meaningless experiment. Even Fig. 1 is contaminated by the presentation once known as the Time Magazine chart: Time Magazine used to present charts showing huge-looking changes in variables like rainfall, with headlines saying "drought threatens!", with a vertical scale that ran from 10 inches of rainfall at the bottom to 10.5 inches at the top. At least your brother's chart has a couple of marks on the vertical scale (which runs from 150 to 200) reminding the reader that the zero of the vertical axis is about where Table 1 starts near the bottom of the page. These effects are not very big.

Also, as you mention, the results are group averages, so they say nothing about any individual's characteristics. If you want to use groups of ten people to make estimates of lengths, this experiment can tell you some useful things about one of the conditions. The main message is, "Don't use verbal estimates to determine lengths." But since we only use them when accuracy isn't important, and use rulers when it is, we haven't learned much.

Relevance to the current thread is minimal, since no attempt was made to determine the relationship between actual lengths and estimated lengths.

Best,

Bill

[From Rick Marken (2010.07.06.1140)]

Bill Powers (2010.07.06.0030 MDT) --

Rick Marken (2010.07.05.1445) --

BP: I really wonder if the data are good enough to tell the
difference.

RM: I think so, especially when the power is >1, which it looks like
it is for electric shock.

BP: I’m not so sure. Look at these data from a 1924 article by Selig
Hecht (all I could quickly find):

That’s discrimination data. I thought you were talking about magnitude estimation data. But I’ve looked at some references on the Power law, trying to get a hold of some data (which I suppose I could easily collect myself) and there certainly are issues with finding the “best fitting” functions to the data. I can see why you wanted to drop the whole thing.

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2010.07.06.1345 MDT)]

BP: I’m not so sure. Look at these data from a 1924 article by Selig
Hecht (all I could quickly find):

RM: That’s discrimination data. I thought you were talking about
magnitude estimation data.

BP: I was. This curve is just
the slope of the log function.

d(log(x))/dx = 1/x, i.e., d(log(x)) = dx/x

Bill

[From Rick Marken (2010.07.06.2130)]

Bill Powers (2010.07.06.1345 MDT) --

BP: I'm not so sure. Look at these data from a 1924 article by Selig Hecht
(all I could quickly find):

RM: That's discrimination data. I thought you were talking about magnitude
estimation data.

BP: I was. This curve is just the slope of the log function.

But the points on the graph are data, right? And they are measures of
"jnds" (delta I/I needed to determine that I + delta I and I are
different). That is, the y axis is jnd and the x axis is log I. The
data I'm talking about are magnitude estimates. It's the magnitude
estimates that are supposed to be a power function of I. So on a
log-log plot, log magnitude estimate should be a linear function of
log I, the slope being the exponent of the power function.

But this seems to me like a digression from my main interest here. I
still would like to know why Martin thinks the data from
psychophysical experiments tells us something about the closed-loop
systems under study and why David thinks the study he posted is the
kind of study that is consistent with a PCT approach to understanding
behavior. If I can find out why they think these things maybe I (we?)
can develop experiments that will test their ideas.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2010.07.06.2353 MDT)]

Rick Marken (2010.07.06.2130) --

> RM: That's discrimination data. I thought you were talking about magnitude
> estimation data.
>
> BP: I was. This curve is just the slope of the log function.

But the points on the graph are data, right? And they are measures of
"jnds" (delta I/I needed to determine that I + delta I and I are
different). That is, the y axis is jnd and the x axis is Iog I. The
data I'm talking about are magnitude estimates.

These data are a different way of representing a set of magnitude estimates. Rather than measuring and presenting p = k*log(i), the experimenter measures and presents dp/di = k/i, which is just the result of taking the first derivative with respect to i of both sides of the log equation. The method of obtaining the curve is different but the result is the same. If you integrate the curve it should represent a set of magnitude estimates.
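Numerically, the integration would go something like this (a sketch
assuming a constant Weber fraction of 0.1, which, as noted below, the
real data contradict):

import numpy as np

i = np.logspace(0, 4, 200)        # stimulus intensities
weber = np.full_like(i, 0.1)      # assumed di/i at each intensity
di = weber * i                    # size of one jnd at each intensity

# Fechner's assumption: each jnd adds one constant unit to p,
# so p(i) is the integral of (1/di) with respect to i
p = np.concatenate([[0.0], np.cumsum(np.diff(i) / di[:-1])])

print(np.allclose(p, np.log(i) / 0.1, rtol=0.05))  # True: constant di/i -> log

Feeding in the actual, non-constant di/i values instead would produce
whatever function the data really imply, log or not.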

Or rather, it would represent the estimates if the mathematical curve actually represented the data, which it doesn't. Since the di/i = constant form doesn't fit the data, the integral of that curve couldn't fit the magnitude estimate data either, except over narrow ranges of intensities over which dI/I is approximately constant. The whole argument about log functions versus power laws is empty if the data are all like this: the truth seems to be that the experience of light intensity follows neither the log law nor the power law over anything like the whole range of intensities actually experienced, so we can just forget the whole thing with respect to perceiving light intensity. And who knows how many other data sets of this kind exist for other sensory variables?

RM: But this seems to me like a digression from my main interest here. I
still would like to know why Martin thinks the data from
psychophysical experiments tells us something about the closed-loop
systems under study

BP: We can say that Martin assumes a model that is basically open-loop because the only actual loop is inside the system and doesn't pass through the environment. It isn't a control system of the usual PCT type. The subjects are controlling a relationship between a real light onset and an imaginary button press. This can be done whether or not there is any real button press. If the subjects are temporarily in this configuration they are behaving open-loop, which we could prove by inserting appropriate disturbances of the output effects and showing that they are not opposed. We could fool the lower system into sensing that the button had been depressed when it actually did not make contact. A system of the kind Martin modeled wouldn't know the difference.

On the other hand, I think that if such a test were actually done, the disturbances would result in some kind of opposition, showing that the controlled variable is not just in imagination. Since the test hasn't been done, and probably won't be done, we have to leave it there. If we ever find out whether Martin's model is or isn't the appropriate one, we can take it up again. There would still be disagreements, perhaps, but they would no longer be about whether this is a closed-loop control system of the type we study in PCT.
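The logic of that test is easy to sketch (toy systems with assumed
gains, nothing like a model of the actual button-press task):

def closed_loop(d_out, r=5.0, gain=50.0, dt=0.01, steps=1000):
    out = 0.0
    for _ in range(steps):
        cv = out + d_out               # the disturbed output effect is sensed
        out += gain * (r - cv) * dt
    return out + d_out                 # net effect on the environment

def open_loop(d_out, r=5.0):
    return r + d_out                   # emits a fixed command, senses nothing

for d in [0.0, 2.0]:
    print(f"d_out={d}: closed-loop net effect={closed_loop(d):.2f}, "
          f"open-loop net effect={open_loop(d):.2f}")

The closed-loop system cancels the output disturbance; the open-loop
system passes it straight through. That difference is what the proposed
test would look for.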

RM: ... and why David thinks the study he posted is the
kind of study that is consistent with a PCT approach to understanding
behavior. If I can find out why they think these things maybe I (we?)
can develop experiments that will test their ideas.

BP: I don't think he claims it is consistent with the PCT approach. In personal communications, he indicates that this sort of study is useful whether or not it is consistent with PCT because it shows that "context effects" alter what we would call perceptual input functions: magnitude estimates for one stimulus object given verbally as numbers are influenced by the magnitudes of preceding similar stimulus objects presented in the same general time frame. Of course this calls into question all claims that there is one simple kind of function operating in perceptions of all types, since the function can change even when the same stimulus object is presented again.

This would not affect a PCT model except that we would have to allow the perceptual input function to change instead of assuming it is constant. When that has to be done, it would be because a model with a constant input function didn't reproduce the data as well as one with a suitable variable input function. So far, that hasn't come up, but if it does all we have to do is substitute the correct variable function (having identified and incorporated into the model the other variables on which the form of the function depends).

The existence of context effects may simply show that we have misidentified the perceptual input function, or the controlled variable. Maybe the subjects were not actually reporting a perception of what the experimenter thinks of as the objective length of a line, but were actually perceiving and reporting on a variable dependent on the average of recent values of the perceptions of individual lines. For example, if light intensities were being judged in an experiment of this kind, a context effect would surely occur if only because of the iris reflex which operates quite slowly, and even slower changes in retinal pigments which bleach at high intensities and are restored slowly at lower intensities. If a set of low light intensities were presented, the iris would open up somewhat, increasing the amount of light reaching the retina, and the retinal sensitivity itself would increase somewhat. With a set of higher intensities, the iris would close down somewhat and the retina would become less sensitive. The "trace" intensity (which would be the last one in a series of presented intensities) would be perceived as more intense when the iris was open (etc) than it was when the iris was less open. This would not require a variable perceptual input function but only a control system that alters the external feedback function used by other control systems.

A more general possibility is that perceptions at some levels work as Edwin Land demonstrated for color perception. There seems to be local feedback inside the perceptual input functions of vision, so that colors are perceived as if the average color over the whole retina is grey. The signals are, in other words, normalized to grey. Thus perceptions of color in one part of the retina are strongly influenced by colors in other parts of the retinal scene. Again, such effects call into question the whole psychophysical scheme, since there is no one function relating perceived color intensity to actual color intensity.
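A crude sketch of the normalize-to-grey idea (only the idea; this is
not Land's actual retinex computation):

import numpy as np

scene = np.array([[0.8, 0.2, 0.1],    # rows: retinal patches
                  [0.6, 0.3, 0.2],    # columns: R, G, B signal strengths
                  [0.7, 0.1, 0.3]])

channel_mean = scene.mean(axis=0)     # scene-wide average in each channel
normalized = scene / channel_mean     # each local signal scaled by that average
print(normalized.round(2))            # channel means are now all 1.0 ("grey")

The same physical patch yields a different normalized signal whenever
the rest of the scene changes, which is the point: there is no single
fixed function from local stimulus to percept.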

But once again, this doesn't affect the PCT model; it just affects details of how we would model perceptual input functions if ever we had experimental data requiring that sort of function to explain them.

As in most psychological experiments, the results may be of some interest to some specialists, but they are all peripheral and unimportant issues in the context of introducing a drastically new paradigm into the life sciences. I am entirely focused on introducing that new paradigm, which is going to change what experiments are done and why they are done. Phenomena like classical or operant conditioning, context effects, this or that nonlinearity of input functions, are all minor issues compared with the major one of whether we should adopt PCT or retain behaviorism, cognitive science, neuroscience, or any of the other theoretical schemes that don't (correctly) take feedback effects into account. I am trying to get people to look up from the tiny details they have been concentrating on, to see that the entire landscape has changed.

Best,

Bill P.

[From Rick Marken (2010.07.07.1030)]

Bill Powers (2010.07.06.2353 MDT)

Rick Marken (2010.07.06.2130) --

BP: These data are a different way of representing a set of magnitude estimates.

I don't think so. These data are discrimination data; they have
nothing to do with magnitude estimation. Stevens wasn't even born
when some of this data was collected (Aubert, 1865!!). As I said, the
y axis is di/i, which is the "difference threshold" or jnd measured
using Weber's techniques. That is, each point on the graph, indicating
a value of di/i for a particular value of i, was obtained in a
difference threshold experiment, where di was adjusted until the
subject indicated that the stimulus with magnitude i is just
noticeably different (jnd) from the stimulus of magnitude i + di.
Fechner assumed these jnds are psychologically equal in size: di/i =
dp = k. If this assumption is true then you get the psychophysical
function relating i to p by integrating dp/di = k/i, which is a log
function if di/i is indeed a constant, which it clearly is not in the
data you show.
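Spelled out, with the Weber constant written separately as c, the
derivation is just:

   Weber's data:         di/i = c          (assumed constant)
   Fechner's postulate:  dp = k per jnd    (equal psychological steps)
   Combining:            dp = (k/c)(di/i), so dp/di = (k/c)(1/i)
   Integrating:          p = (k/c) log(i) + constant

which is the log law -- and it holds only where di/i really is
constant.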

Rather than measuring and presenting p = k*log(i), the experimenter measures
and presents dp/di = k/i, which is just the result of taking the first
derivative with respect to i of both sides of the log equation. The method
of obtaining the curve is different but the result is the same. If you
integrate the curve it should represent a set of magnitude estimates.

That's what _should_ happen if Fechner's log law is correct. But
obviously Fechner's log law can't be right, given the data in the
Hecht graph. But Fechner's log law is completely theoretical, based on
the untestable assumption that jnd's (di/i) are psychologically equal
and the clearly false assumption that the observed value of di/i is a
constant over the entire range of I.

The log law is based on discrimination data. You measure how much di
must be added to different starting values of i for the subject to be
able to reliably state that the stimulus with value i differs from the
stimulus with value i + di. The power law is based on magnitude
estimation data. You ask a subject to generate a number whose size
relative to a modulus number is proportional to the psychological
magnitude of the stimulus with intensity i relative to a modulus
stimulus with intensity m. The power law is obtained by finding the
function that is the best fit to the actual relationship between i and
the magnitude estimates. The assumption here is that the magnitude
estimate numbers are directly proportional to the experienced
psychological intensity of the stimuli.
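In practice that best fit is found as a straight line on log-log
coordinates, along these lines (synthetic estimates; the exponent 0.33
is assumed for illustration, roughly the textbook value for
brightness):

import numpy as np

rng = np.random.default_rng(1)
i = np.logspace(0, 3, 25)                                 # stimulus intensities
est = 3.0 * i ** 0.33 * np.exp(0.1 * rng.standard_normal(i.size))

# log(est) = log(k) + p*log(i): a line on log-log axes, slope = exponent
p_hat, logk_hat = np.polyfit(np.log(i), np.log(est), 1)
print(f"fitted exponent = {p_hat:.2f}, k = {np.exp(logk_hat):.2f}")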

This is all just standard psychophysics. The fact that Fechner's log
law is probably wrong has nothing to do with the observed shape of the
functions relating i to magnitude estimates. And those relationships
tend to be well approximated by power functions. Our (now my) power
law paper shows only that IF the actual psychophysical function
relating i to psych magnitude (p) is logarithmic, then the observed
power law relationship between i and magnitude estimates (or magnitude
productions) is an example of a behavioral illusion if the closed loop
model of the magnitude estimation task is correct.

In fact we don't and probably never will know the _actual_
psychophysical law relating i to psych magnitude, though I'm inclined
to go with you in thinking that the relationship is probably very
close to linear through most of the perceptible range of i since this
function works so well in our models. But I should point out that data
like those in the Hecht paper that you posted don't prove that the
psychophysical function is _not_ a log function either. Those data
were obtained in conventional experiments; the plotted value of di/i
is simply the setting of the instruments that was associated with a
"yes, they are different" response 75% of the time. Since we don't
know what the subjects are controlling for in these experiments, the
"too large" values of di/i required at low values of i may be because
the psychological values of these stimuli are less different in this
range (violating Fechner's assumption about the psychological value of
jnds) or it could be something else.

RM: But this seems to me like a digression from my main interest here. I
still would like to know why Martin thinks the data from
psychophysical experiments tells us something about the closed-loop
systems under study

BP: We can say that Martin assumes a model that is basically open-loop
because the only actual loop is inside the system and doesn't pass through
the environment. It isn't a control system of the usual PCT type.

Yes, this seems easily testable.

RM: ... and why David thinks the study he posted is the
kind of study that is consistent with a PCT approach to understanding
behavior. If I can find out why they think these things maybe I (we?)
can develop experiments that will test their ideas.

BP: I don't think he claims it is consistent with the PCT approach. In
personal communications, he indicates that this sort of study is useful
whether or not it is consistent with PCT because it shows that "context
effects" alter what we would call perceptual input functions:

Yes, I suppose every experiment seems like it could be useful in some
way. I am suspicious of all data collected under the assumption of
open-loop causality, partly because it is average-based but also
because such results could be a behavioral illusion.

As in most psychological experiments, the results may be of some interest to
some specialists, but they are all peripheral and unimportant issues in the
context of introducing a drastically new paradigm into the life sciences. I
am entirely focused on introducing that new paradigm, which is going to
change what experiments are done and why they are done.

Me too. But I don't think this is going to happen until people see the
relevance of this new paradigm to what they are doing now. Scientific
psychologists are not really very good at seeing that they are using a
particular paradigm. Either that, or they don't seem to realize that
the open loop paradigm they are using _is_ a paradigm and that it
might be wrong. If scientific psychologists really understood that
their work is based on a paradigm that might be wrong then your 1978
paper would have sealed the deal. There would have been a collective
cry of "holy shit" and everyone would have abandoned the open loop
paradigm and started studying organisms as though they might be closed
loop systems (as I did). But they didn't and I think it's mainly
because they don't think their paradigm is "optional". It simply _is_
the scientific paradigm. This, and the fact that the open loop causal
paradigm seems to fit the results of their experiments just fine --
statistically, anyway -- makes it hard to make a case for the
closed-loop paradigm. That's why I'm trying to show that the
apparently open-loop nature of behavior in conventional experiments is
closed loop, just like it is in the tracking task.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2010.07.07.1445 MDT)]

Rick Marken (2010.07.07.1030) --

> BP: These data are a different way of representing a set of magnitude estimates.

RM: I don't think so. These data are discrimination data; they have
nothing to do with magnitude estimation. Stevens wasn't even born
when some of this data was collected (Aubert, 1865!!).

BP: Measuring just-discriminable-differences is a way to measure a log perception function (or any other), assuming that there is a constant discrimination size for the internal perception. If the function is a log function, then as the intensity of the stimulus increases, the size of a step-increase in the stimulus has to become larger in order to produce a constant size of step in the perceptual signal.

To look at it from the other side, start with a data set showing a set of magnitude estimates as a function of physical stimulus magnitude measurements. If you now look at the table to see how much change in stimulus is needed to generate a constant step-change in magnitude estimate, you will generate another table that looks like a difference threshold table. If the magnitude estimates truly follow a log function of stimulus intensity, the difference threshold will be a constant fraction of intensity. That is, if the threshold is 1 unit of change at a stimulus intensity of 10, it will be 10 units at a stimulus intensity of 100. Since it is not (in the data I attached), the data clearly show that the magnitude-estimation function, if obtained by magnitude estimates, would not be logarithmic. It doesn't matter which way the log function is determined; if the basic assumption, that the internal threshold is constant, is true, either way can be used to check for a log function (or any other form, actually, since the JND is just measuring the first derivative of whatever the magnitude estimation function is).
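That bookkeeping takes only a few lines (a log magnitude function and
the step size are assumed for illustration):

import math

k, step = 10.0, 1.0              # assumed scale factor and estimate step size
for i in [10.0, 100.0, 1000.0]:
    # solve k*log(i + di) - k*log(i) = step for di
    di = i * (math.exp(step / k) - 1.0)
    print(f"i={i:7.1f}  di={di:7.2f}  di/i={di / i:.4f}")  # di/i is constant

With these constants the threshold comes out about 1 unit of change at
an intensity of 10 and about 10 units at 100, exactly the
constant-fraction signature of a log function.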

I would probably trust the jnd method more than a numerical magnitude estimate because it doesn't rely on verbal communication, and it doesn't raise the question "How big is a number?". The whole approach of numerical estimation strikes me as pretty naive. How long is one inch? The first thing you would have to do is have the subject show you with thumb and finger, and measure it with a ruler. So how long is a ruler? There isn't any absolute scale of perception. The best you can do is measure impulses per second of perceptual signal, and that still doesn't tell you how big n impulses per second seems to consciousness. Everything has to be compared with something else.

Best,

Bill P.

···

As I said, the
y axis is di/i, which is the "difference threshold" or jnd measured
using Weber's techniques. That is, each point on the graph, indicating
a value of di/i for a particular value of i, was obtained in a
difference threshold experiment, where di was adjusted until the
subject indicated that the stimulus with magnitude i is just
noticeably different (jnd) from the stimulus of magnitude i + di.
Fechner assumed these jnds are psychologically equal in size: di/i =
dp = k. If this assumption is true then you get the psychophysical
function relating i to p by integrating dp/di = k/i, which is a log
function if di/i is indeed a constant, which it clearly is not in the
data you show.

> Rather than measuring and presenting p = k*log(i), the experimenter measures
> and presents dp/di = k/i, which is just the result of taking the first
> derivative with respect to i of both sides of the log equation. The method
> of obtaining the curve is different but the result is the same. If you
> integrate the curve it should represent a set of magnitude estimates.

That's what _should_ happen if Fechner's log law is correct. But
obviously Fechner's log law can't be right, given the data in the
Hecht graph. But Fechner's log law is completely theoretical, based on
the untestable assumption that jnd's (di/i) are psychologically equal
and the clearly false assumption that the observed value of di/i is a
constant over the entire range of I.

The log law is based on discrimination data. You measure how much di
must be added to different starting values of i for the subject to be
able to reliably state that the stimulus with value i differs from the
stimulus with value i + di. The power law is based on magnitude
estimation data. You ask a subject to generate a number whose size
relative to a modulus number is proportional to the psychological
magnitude of the stimulus with intensity i relative to a modulus
stimulus with intensity m. The power law is obtained by finding the
function that is the best fit to the actual relationship between i and
the magnitude estimates. The assumption here is that the magnitude
estimate numbers are directly proportional to the experienced
psychological intensity of the stimuli.

This is all just standard psychophysics. The fact that Fechner's log
law is probably wrong has nothing to do with the observed shape of the
functions relating i to magnitude estimates. And those relationships
tend to be well approximated by power functions. Our (now my) power
law paper shows only that IF the actual psychophysical function
relating i to psych magnitude (p) is logarithmic, then the observed
power law relationship between i and magnitude estimates (or magnitude
productions) is an example of a behavioral illusion if the closed loop
model of the magnitude estimation task is correct.

In fact we don't and probably never will know the _actual_
psychophysical law relating i to psych magnitude, though I'm inclined
to go with you in thinking that the relationship is probably very
close to linear through most of the perceptible range of i since this
function works so well in our models. But I should point out that data
like those in the Hecht paper that you posted don't prove that the
psychophysical function is _not_ a log function either. Those data
were obtained in conventional experiments; the plotted value of di/i
is simply the setting of the instruments that was associated with a
"yes, they are different" response 75% of the time. Since we don't
know what the subjects are controlling for in these experiments, the
"too large" values of di/i required at low values of i may be because
the psychological values of these stimuli are less different in this
range (violating Fechner's assumption about the psychological value of
jnds) or it could be something else.

>> RM: But this seems to me like a digression from my main interest here. I
>> still would like to know why Martin thinks the data from
>> psychophysical experiments tells us something about the closed-loop
>> systems under study
>
> BP: We can say that Martin assumes a model that is basically open-loop
> because the only actual loop is inside the system and doesn't pass through
> the environment. It isn't a control system of the usual PCT type.

Yes, this seems easily testable.

>> RM: ... and why David thinks the study he posted is the
>> kind of study that is consistent with a PCT approach to understanding
>> behavior. If I can find out why they think these things maybe I (we?)
>> can develop experiments that will test their ideas.
>
> BP: I don't think he claims it is consistent with the PCT approach. In
> personal communications, he indicates that this sort of study is useful
> whether or not it is consistent with PCT because it shows that "context
> effects" alter what we would call perceptual input functions:

Yes, I suppose every experiment seems like it could be useful in some
way. I am suspicious of all data collected under the assumption of
open-loop causality, partly because it is average-based but also
because such results could be a behavioral illusion.

> As in most psychological experiments, the results may be of some interest to
> some specialists, but they are all peripheral and unimportant issues in the
> context of introducing a drastically new paradigm into the life sciences. I
> am entirely focused on introducing that new paradigm, which is going to
> change what experiments are done and why they are done.

Me too. But I don't think this is going to happen until people see the
relevance of this new paradigm to what they are doing now. Scientific
psychologists are not really very good at seeing that they are using a
particular paradigm. Either that, or they don't seem to realize that
the open loop paradigm they are using _is_ a paradigm and that it
might be wrong. If scientific psychologists really understood that
their work is based on a paradigm that might be wrong then your 1978
paper would have sealed the deal. There would have been a collective
cry of "holy shit" and everyone would have abandoned the open loop
paradigm and started studying organisms as though they might be closed
loop systems (as I did). But they didn't and I think it's mainly
because they don't think their paradigm is "optional". It simply _is_
the scientific paradigm. This, and the fact that the open loop causal
paradigm seems to fit the results of their experiments just fine --
statistically, anyway -- makes it hard to make a case for the
closed-loop paradigm. That's why I'm trying to show that the
apparently open-loop behavior seen in conventional experiments is
actually closed loop, just like it is in the tracking task.

Best

Rick

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2010.07.07.1445 MDT)]

Rick Marken (2010.07.07.1030) --

> BP: These data are a different way of representing a set of magnitude estimates.

RM: I don't think so. These data are discrimination data; they have
nothing to do with magnitude estimation. Stevens wasn't even born
when some of these data were collected (Aubert, 1865!!).

Measuring just-discriminable-differences is a way to measure a log perception function, assuming that there is a constant discrimination size for the internal perception. As the intensity of the stimulus increases, the size of a step-increase in the stimulus has to become larger in order to produce a constant size of step in the perceptual signal. That is a characteristic of a log function.

To look at it from the other side, start with a data set showing a set of magnitude estimates as a function of physical stimulus magnitude measurements. If you now look at how much change in stimulus is needed to generate a constant step-change in magnitude estimate, you will generate a table that looks like a difference threshold table. If the magnitude estimates truly follow a log function of stimulus intensity, the difference threshold will be a constant fraction of the intensity. That is, if the threshold is 1 unit of change at a stimulus intensity of 10, it will be 10 units at a stimulus intensity of 100. The ratio of difference threshold to stimulus intensity will be a constant. Any curve that has that characteristic is a logarithmic curve.
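In symbols: if p = a*log(i), then a step of size dp requires dp =
a*log((i + di)/i) = a*log(1 + di/i), so a constant dp implies a
constant ratio di/i at every i. In the example above, di/i = 0.1 at
both i = 10 and i = 100.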

In the data graph I attached, a log function would correspond to a difference-threshold plot that would be a horizontal straight line across the graph. The jnd would be a constant fraction of the stimulus intensity.

RM: As I said, the
y axis is di/i, which is the "difference threshold" or jnd measured
using Weber's techniques. That is, each point on the graph, indicating
a value of di/i for a particular value of i, was obtained in a
difference threshold experiment, where di was adjusted until the
subject indicated that the stimulus with magnitude i is just
noticeably different (jnd) from the stimulus of magnitude i + di.
Fechner assumed these jnds are psychologically equal in size: di/i =
dp = k. If this assumption is true then you get the psychophysical
function relating i to p by integrating dp/di = k/i, which is a log
function if di/i is indeed a constant, which it clearly is not in the
data you show.

Now you seem to be agreeing with me -- if you're disagreeing, I don't know what you're disagreeing about.

The only hitch possible here is that the psychologically constant jnd is not in fact constant but depends on the perceptual signal's magnitude in some unknown way. But it seems to me that would then show up in magnitude estimates, too, in an exactly corresponding way.

Could it be that you're objecting because the JND curve rules out a logarithmic magnitude-estimate curve, but you believe that the magnitude estimate curve is actually logarithmic?

Best,

Bill P.


[From Rick Marken (2010.07.07.1920)]

Bill Powers (2010.07.07.1445 MDT)--

Rick Marken (2010.07.07.1030) --

BP: Measuring just-discriminable-differences is a way to measure a log
perception function, assuming that there is a constant discrimination size
for the internal perception.

Yes, as I said.

BP: To look at it from the other side, start with a data set showing a set of
magnitude estimates as a function of physical stimulus magnitude
measurements. If you now look at how much change in stimulus is needed to
generate a constant step-change in magnitude estimate, you will generate a
table that looks like a difference threshold table.

Sure, but now that assumes that a step change in magnitude estimate
reflects a step change in the magnitude of the perception. And all
this is based on an input-output model of the behavior in these
experiments.

RM: As I said, the
y axis is di/i, which is the "difference threshold" or jnd measured
using Weber's techniques. That is, each point on the graph, indicating
a value of di/i for a particular value of i, was obtained in a
difference threshold experiment, where di was adjusted until the
subject indicated that the stimulus with magnitude i is just
noticeably different (jnd) from the stimulus of magnitude i + di.
Fechner assumed these jnds are psychologically equal in size: di/i =
dp = k. If this assumption is true then you get the psychophysical
function relating i to p by integrating dp/di = k/i, which is a log
function if di/i is indeed a constant, which it clearly is not in the
data you show.

BP: Now you seem to be agreeing with me -- if you're disagreeing, I don't know
what you're disagreeing about.

I don't either. I guess I'm disagreeing about your pointing to the
data and saying that the perceptual function couldn't be a log
function, or a power function or whatever. Taking these data at face
value seems to me to require an open loop model of the subject.

My paper on the power law has nothing to do with what the perceptual
function _really_ is. It's just designed to illustrate the behavioral
illusion in the context of a well-known psychophysical experiment. It's
just saying that the power law relationship observed in magnitude
estimation experiments is what would occur if the subject is
controlling a relationship between the perceptions of S and R and if
those perceptions are logarithmic.
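
For what it's worth, here is a minimal numerical sketch of that kind
of model (my own construction, not code from the paper; the gains k1,
k2 and the adjustment rate are made-up values):

    import numpy as np

    k1, k2 = 2.0, 1.0          # assumed gains of the two log perceptual functions
    rate = 0.2                 # assumed output adjustment rate per iteration

    stimuli = np.logspace(0.5, 2.5, 20)   # stimulus intensities I
    estimates = []
    for I in stimuli:
        N = 1.0                           # starting number estimate
        for _ in range(500):              # let the control loop settle
            Ps = k1 * np.log(I)           # perceived stimulus magnitude
            Pn = k2 * np.log(N)           # perceived magnitude of the number
            error = Ps - Pn               # controlled relationship: Ps = Pn
            N *= np.exp(rate * error)     # adjust the number to reduce the error
        estimates.append(N)

    # Fit a power law N = c*I^b by regressing log(N) on log(I)
    b, logc = np.polyfit(np.log(stimuli), np.log(estimates), 1)
    print(b)                              # close to k1/k2 = 2.0

The fitted exponent comes out near k1/k2, so the "power law" here is a
property of the loop and the two input functions, not of an open-loop
transformation of S into R.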

BP: Could it be that you're objecting because the JND curve rules out a
logarithmic magnitude-estimate curve, but you believe that the magnitude
estimate curve is actually logarithmic?

No. I'm objecting (mildly) because we are looking at the results of
discrimination (jnd) and magnitude estimation experiments as though
the behavior of the subjects in these experiments is open loop. I
think all of psychophysics has to be redone using experiments that can
be appropriately analyzed using closed loop models. The psychophysical
functions will then be the perceptual functions that the model
requires to fit the data.

Best

Rick


--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2010.07.08.0905 MDT)]

Rick Marken (2010.07.07.1920) --

BP: If you now look at how much change in stimulus is needed to
generate a constant step-change in magnitude estimate, you will
generate a table that looks like a difference threshold table.

RM: Sure, but now that assumes that a step change in magnitude estimate
reflects a step change in the magnitude of the perception. And all
this is based on an input-output model of the behavior in these
experiments.

It doesn't assume that the two step-changes are of the same size. One is
the log of the other (but wait a couple of paragraphs).

I would propose that there is a perception Ps of the stimulus variable,
which is one component of qi. The perception varies (to try out the W-F
hypothesis) as the logarithm of stimulus intensity. The subject is told
to utter a number N, the other component of qi, that has the same
perceived magnitude, Pn, as the stimulus perception Ps, so the task is
to control for equality of two perceptions, a simple relationship. As
the stimulus is changed, the perceptual signal Ps changes, disturbing
the equality, and the subject finds another number, N, whose perceived
magnitude Pn appears the same as the new value of Ps.
Well, thanks for bringing this up, because this leads to something
neither of us has seen until now.

Suppose that the magnitude of numbers is perceived logarithmically,
while the magnitude of the stimulus is perceived linearly. When the two
perceptual signals Ps and Pn are adjusted to produce equality, we have

   Ps = k1*I
   Pn = k2*log(N)

Ps = Pn, therefore

   I = (k2/k1)*log(N) = u*log(N)

That doesn't work; it's backward. So suppose the stimulus is perceived
logarithmically and the number is perceived linearly:

   Ps = k1*log(I)
   Pn = k2*N

Ps = Pn, therefore

   N = (k1/k2)*log(I) = v*log(I)

Ah, there is Weber-Fechner for numerical estimates.

One more: now let both be logarithmic functions:

   Ps = k1*log(I)
   Pn = k2*log(N)

Ps = Pn, therefore

   log(I) = (k2/k1)*log(N) = a*log(N), or

   I = N^a

… and there is Stevens' power law, give or take a scaling factor.
The behavioral illusion does not appear in this, because we’re taking N
as an input quantity, and not focusing on the output action, the
vocalization or writing or pointing behavior that produces N.
We could expand the right side as a Taylor series, and we would have a
statement that I equals a polynomial in N.

http://en.wikipedia.org/wiki/Taylor_series
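
For instance, expanding about a working point N0 > 0:

   I = N0^a + a*N0^(a-1)*(N - N0) + (a*(a-1)/2)*N0^(a-2)*(N - N0)^2 + ...

Keeping the first two or three terms gives the polynomial in N.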

I’m pretty sure that the first two or three terms in such a series would
fit any experimental data for a group of subjects well within the
scatter. In other words, a log relationship is no more likely to describe
the actual functions than any of a large number of polynomial functions.
I now wish I had Dr. Who’s TARDIS (his time machine), because this is
what I suspected 50 years ago when I wrote that letter about Stevens’
paper in Science. A log function is just one of many possible
nonlinear functions, and some reason other than curve-fitting would be
needed to justify picking that nonlinear function over all the other
candidates – especially when the data do not support the concept of a
log function over more than a limited range of I. If you limit the range
enough, almost any function will fit the data within one sigma, and by
including more terms you can fit a polynomial to almost any data. There
has to be some principled reason for preferring either the log or the
power function, and at the moment I don’t have one.
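
To illustrate with made-up numbers (a sketch only): generate data that
truly follow a log law, add scatter, and then fit both the log form and
a plain quadratic over a limited range of I:

    import numpy as np

    rng = np.random.default_rng(1)
    I = np.linspace(20, 100, 30)                      # a limited range of I
    p = 10 * np.log(I) + rng.normal(0, 1.0, I.size)   # true log law plus scatter

    log_fit = np.polyfit(np.log(I), p, 1)             # fit p = a1*log(I) + a0
    quad_fit = np.polyfit(I, p, 2)                    # fit a plain quadratic in I

    rms_log = np.sqrt(np.mean((np.polyval(log_fit, np.log(I)) - p) ** 2))
    rms_quad = np.sqrt(np.mean((np.polyval(quad_fit, I) - p) ** 2))
    print(rms_log, rms_quad)   # usually comparable; the scatter hides the difference

Within one sigma of scatter the two fits are hard to tell apart, which
is just the curve-fitting problem described above.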

One day we will have to try a polynomial in the input function of the
tracking experiment to see if we can get a significant improvement in the
fit. We probably can, but the question is whether the increased
complexity of the model gives us a net improvement in the theory. If we
find that the best-fit polynomial turns out to match the log or power
function reasonably well, we might then have some reason to give logs
another chance (I’m always ready to give powers another chance). But
starting with the polynomial introduces less bias than assuming any more
specific form at the start.

Best,

Bill P.