Stevens, Weber-Fechner, and PCT

[From Bill Powers (950223.0500 MST)]
RE: Stevens' Power Law, the Weber-Fechner Law, and PCT.

The following may be a reconstruction of something, or related to
something, I read a few years ago and can't find the reference for. I
would like the opinion of all you perception experts out there.


----------------------------------------
Given: stimulus magnitude s and perceptual signal p.

Given: the Weber-Fechner law in the form dp = k*ds/s, or p = k*log(s).

The Stevens power law is found by adjusting two stimuli until they are
perceived as equal.

By the Weber-Fechner law,

      p1 = a*log(s1)
      p2 = b*log(s2)

if p1 = p2 then

     a*log(s1) = b*log(s2), or

     s1^a = s2^b

Which is Stevens' power law. Note that the power law expresses only the
relationship between perceptions of different stimuli; it does not
establish an overall scaling factor between perceptions and stimuli,
because

      if s1^a = s2^b then

         c*s1^(m*a) = c*s2^(m*b)

      where c and m are any numbers.

Therefore there are unknown common factors m and c relating all
perceptions to all stimuli, and no absolute psychophysical scale can be
established.
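
A quick numerical check of this (a minimal Python sketch; the
coefficients and stimulus values are arbitrary choices, not data):

    import math

    # Stimuli matched under two logarithmic channels: a*log(s1) = b*log(s2).
    a, b = 0.5, 1.5
    s2 = 20.0
    s1 = math.exp((b / a) * math.log(s2))   # adjust s1 until the signals match

    print(s1 ** a, s2 ** b)                 # equal: the power-law relation

    # The relation survives any common factor c and exponent m, so matching
    # alone cannot fix an absolute perception-to-stimulus scale.
    for c, m in ((7.0, 0.3), (0.2, 4.0)):
        print(c * s1 ** (m * a), c * s2 ** (m * b))   # still equal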

When Stevens published an article on the power law in Science many years
ago, I wrote a letter asking whether this wasn't simply a version of the
logarithmic law. The letter was rejected because Stevens said that
logarithms have nothing to do with the power law.
---------------------------------
These ruminations arose in the context of setting up a control-system
model with a logarithmic input function. Bruce Abbott had used a model
of control of the ratio of two input quantities in which the error
signal was multiplied by the input quantity in the denominator of the
ratio. This was motivated by the desire to keep the integration factor
in the output quantity equal to 1.0, and to have the control model reach
its steady state in one iteration.
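
A minimal sketch of the kind of loop being described (my reconstruction
for illustration, not Bruce's actual program; the magnitudes are
arbitrary):

    # Perceive the ratio num/den; multiplying the error by den makes the
    # effective integration factor 1.0, so the loop settles in one step.
    num, den, ref = 100.0, 250.0, 1.0
    output = 0.0
    for step in range(3):
        p = (num + output) / den    # handle output adds to the numerator
        error = ref - p
        output += error * den       # error scaled by the denominator
        print(step, p)              # p reaches the reference after one step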

The aim of having the model correct its error in one iteration was
actually unnecessary, because the real person being modeled can't do
that; with a step-disturbance, the real person responds with a finite
time-constant equivalent to perhaps 10 to 30 iterations of the program.

When the controlled quantity is a ratio, however, we find that the best
integration factor (in the particular experiment being done) is over 200
times the best integration factor under the assumption that the
controlled quantity is the difference between the same two variables.
The reason has to do with the loop gain of the control system, which is
the product of the output gain, the external feedback gain, and the gain
in the input function.

In the model using the difference between the two variables, one pixel
of handle movement results in one pixel of change of the input variable
in the numerator of the ratio, which results in one pixel of error
(expressing all units in terms of effects on the display screen).
However, the two input variables had an average magnitude of about 200
to 300 pixels. When the definition of the input function was changed so
the ratio was perceived, one pixel of handle movement now created only
1/200 to 1/300 of a unit of perceptual change in the model.

This meant that the loop gain dropped by a factor of 200 to 300. To make
the model behave like the person, it was necessary to increase the gain
elsewhere in the loop. Bruce increased it in the comparator, but the
increase could just as easily have been put into the input function or
the output function. What mattered was not using the value of one input
variable in the denominator of the ratio as a multiplier, but using a
multiplier of about the right size (which could then remain constant as
the model ran).
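
The gain arithmetic can be seen in a small sketch (illustrative values
only; a typical element length of 250 pixels is assumed from the 200-300
range above):

    # Small-signal gain of the two candidate input functions for one
    # pixel of handle movement in the numerator.
    den = 250.0                            # typical element length, pixels

    dp_difference = (251.0 - den) - (250.0 - den)   # 1.0 unit per pixel
    dp_ratio = 251.0 / den - 250.0 / den            # 1/250 unit per pixel

    print(dp_difference / dp_ratio)        # 250: the factor by which the
                                           # loop gain drops, and which must
                                           # be restored elsewhere in the loop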

The two input variables being modeled were the horizontal and vertical
elements of an inverted T. The horizontal element's length was varied by
the computer, and the participant was to use a mouse to keep the
vertical element the same length. In fact, the participant keeps the
vertical element significantly shorter than the horizontal element,
thinking it is equal. This is the T-illusion.

Two hypotheses were tested: that the illusion consists of a constant
difference between the lengths of the elements, and that it consists of
a constant ratio of the two elements. This is how ratio control came
into the model, and how we discovered the need to compensate for the
drop in loop gain that occurs in going from absolute length measures to
ratios. There is a consistent improvement in predictions of the
participant's behavior using the ratio model of perception in comparison
with the length-difference model.

Direct perception of the ratio vertical/horizontal is one possibility.
However, an equivalent perception would be log(vertical) -
log(horizontal), with a different scaling factor. The log-perception
model matches the behavior of the direct-ratio-perception model,
predicting the same magnitude of the illusion within 0.3 per cent and
predicting the handle movements with the same RMS error, 10.2 pixels, or
3 to 5 per cent of the range of the handle movements.
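
The equivalence of the two input functions is easy to verify (a sketch;
the lengths are made-up values):

    import math

    # log(v) - log(h) is exactly log(v/h), so controlling the log-difference
    # to a reference log(r) is the same as controlling the ratio to r.
    h = 250.0
    for v in (180.0, 200.0, 220.0):
        p_ratio = v / h                        # direct ratio perception
        p_logdiff = math.log(v) - math.log(h)  # log-difference perception
        print(v, p_ratio, p_logdiff, math.log(p_ratio))  # last two equal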

Thus it seems reasonable to say that in all control-system models
involving lengths, we should use an input function that perceives a
constant times the log of the actual length. This leaves some questions.
In a pursuit-tracking experiment, for example, should the position of
both target and cursor be reduced to logs before subtracting target from
cursor to get the magnitude of the perceived relationship? Or should the
target position be subtracted from the cursor position first, and then
the log of the difference be taken to produce the perceptual signal?
Only simulations will show which version fits best, and whether either
one yields improvements in predictions over the linear-perception model.
There are, of course, problems in dealing with the logs of variables
that can go negative, which may have to be handled by using two
perceptual functions, one for deviations in each direction from zero.
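
One way such a split might look (purely a sketch of the idea, with an
assumed one-unit threshold; nothing here comes from a fitted model):

    import math

    # Two one-sided logarithmic perceptual functions, one per direction of
    # deviation from zero, combined into a single signed perceptual signal.
    def one_sided_log(x, k=1.0, threshold=1.0):
        return k * math.log(x / threshold) if x > threshold else 0.0

    def signed_log_perception(x):
        return one_sided_log(x) - one_sided_log(-x)

    for x in (-100.0, -10.0, 0.0, 10.0, 100.0):
        print(x, signed_log_perception(x))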
---------------------------------------
Logarithmic perception predicts some effects not seen with linear
perception. The main prediction concerns the scale of the presented
figures on the computer screen. In a linear model, the viewing distance
affects the size of image movements on the retina. Also, if the screen
presentation is simply scaled up or down, the same effect occurs. In
either case, the linear model predicts that the loop gain will change
with a change in the retinal size of the presentation, so that the
best-fit value of the integration factor, determined in the usual way,
will depend directly on the visual size of the presentation.

In a logarithmic-perception model, on the other hand, the loop gain
becomes independent of the size of the presentation. A change of scale
is equivalent to a constant added to or subtracted from the perceptual
signal, and thus behaves like a disturbance that the control system will
remove by its normal action. Where ratios are involved, there is no
effect at all because the constant is added to the log of the numerator
and denominator equally and is removed when the logs are subtracted in
the perceptual function. Most importantly, the loop gain will be
unaffected by changes in scale, and we will find the same best value of
the integration factor regardless of scale (until we reach the limits of
the range over which the logarithmic relationship holds true).
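
A numerical check of the scale argument (a sketch; the lengths and the
scale factor are arbitrary):

    import math

    v, h, c = 120.0, 150.0, 2.5      # element lengths and a scale factor

    # A single log-length perception shifts by the constant log(c)...
    print(math.log(c * v) - math.log(v), math.log(c))      # equal

    # ...but a log-ratio perception is unaffected by the scaling.
    print(math.log(v) - math.log(h),
          math.log(c * v) - math.log(c * h))               # identical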

Further investigations of these effects seem called for.
------------------------------------------------------------------
Best to all,

Bill P.

[Martin Taylor 950223 14:45]

Bill Powers (950223.0500 MST)

> Logarithmic perception predicts some effects not seen with linear
> perception. The main prediction concerns the scale of the presented
> figures on the computer screen.
> ...
> In a logarithmic-perception model, on the other hand, the loop gain
> becomes independent of the size of the presentation.

Yes. In conventional psychophysics and psychophysiology, a logarithmic
relation between input and output (of, say, a sensor) is often assumed
or observed. Computational vision people have shown that a sequence
of levels of logarithmic transformation can permit the (third? level)
output to be independent of scale, location, rotation, and the like.
It is quite powerful that way. I think it should probably be the first
thing tried in modelling the perceptual function of a simulation of
a proposed control system--remembering that the logarithm cannot be
right for very small or very large values of the input. For small
inputs, the output is likely to stay near zero until the input reaches
some value at which it begins to grow and approach the logarithmic
function; for large inputs there is probably some maximum output beyond
which the function saturates. But over a wide range of input, the
logarithm may often be pretty close.
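
A function with roughly that shape is easy to write down (my own
construction for illustration; the threshold and maximum are assumed
values, not measurements):

    import math

    # Near zero for s << s0, approaching k*log(s/s0) for s >> s0, and
    # limited at p_max to represent saturation.
    def bounded_log(s, s0=1.0, p_max=10.0, k=1.0):
        return min(k * math.log(1.0 + s / s0), p_max)

    for s in (0.01, 0.5, 5.0, 500.0, 5e6):
        print(s, bounded_log(s))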

Martin

[From Rick Marken (950223.1315)]

Bill Powers (950223.0500 MST) --

> I would like the opinion of all you perception experts out there.

Opinion coming up;-)

> By the Weber-Fechner law,
>
>      p1 = a*log(s1)
>      p2 = b*log(s2)
>
> if p1 = p2 then
>
>    a*log(s1) = b*log(s2), or
>
>     s1^a = s2^b
>
> Which is Stevens' power law.

> When Stevens published an article on the power law in Science many years
> ago, I wrote a letter asking whether this wasn't simply a version of the
> logarithmic law. The letter was rejected because Stevens said that
> logarithms have nothing to do with the power law.

Since Stevensians (people who BELIEVE in Stevens' Law) know all about "cross
modality matching", they know all about the fact that s1^a = s2^b when
you use perceptions of stimuli in one modality, s1, to match perceptions of
stimuli in another, s2. What I think they don't understand is that, from a
PCT perspective, magnitude estimation ALWAYS involves "cross modality
matching"; the subject is always matching one perception to another; indeed,
even the relationship between p1 and p2 (equality) is a perception; it's ALL
perception.

Stevensians seem to think that magnitude estimation using numbers gives the
"real" measure of perceptual magnitude; that is, they act as though
magnitude estimates are not perceptions derived from sensory input. So
when they do cross modality matching they think they are looking at the
"real" exponents (a and b) derived from the "real" method of measuring the
psychophysical function -- numerical magnitude estimation. Cross modality
matching is thought of as a means of "validating" these estimates; in "cross
modality matching" the Stevensians expect to find that the settings of
s1 that match s2 will be proportional to s2^(b/a). And this is, indeed, what
is found. However, as you show in your derivation, cross modality matching
does not really validate the original power law relationships (s1^a and
s2^b), because those relationships were themselves derived from cross
modality matching of perceptions (of number) that (as you show) could be
logarithmically related to their physical correlates.
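
The point is easy to see numerically (a sketch with arbitrary
coefficients): two purely logarithmic perceivers still yield matching
data that look exactly like a power law with exponent b/a.

    import math

    a, b = 2.0, 3.0               # logarithmic coefficients, two modalities
    for s2 in (1.0, 10.0, 100.0, 1000.0):
        # the setting of s1 at which a*log(s1) equals b*log(s2)
        s1 = math.exp((b / a) * math.log(s2))
        print(s2, s1, s2 ** (b / a))    # s1 tracks s2^(b/a) exactly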

Best

Rick

[Martin Taylor 950223 16:40]

Rick Marken (950223.1315) to Bill Powers (950223.0500 MST)

> Since Stevensians (people who BELIEVE in Stevens' Law) know all about "cross
> modality matching", they know all about the fact that s1^a = s2^b when
> you use perceptions of stimuli in one modality, s1, to match perceptions of
> stimuli in another, s2. What I think they don't understand is that, from a
> PCT perspective, magnitude estimation ALWAYS involves "cross modality
> matching"; the subject is always matching one perception to another; indeed,
> even the relationship between p1 and p2 (equality) is a perception; it's ALL
> perception.

Yes, and I agree with the rest of your posting, too. But there's more,
which might be relevant to some aspect of PCT (don't know quite how, but
it seems likely).

In my summer student days, I did one summer of cross-modality matching,
on something of a grand scale. I had a LOT of modalities, ranging from
the length of a line through the colour of an electrical panel and the
timing of a light flash between two others to the location of a nonsense
syllable in a partially learned list. And many others. All involved
placing something between two endpoints on some discrete or continuous
scale. I might ask "Tell me the syllable that most closely matches
the relative greyness of this patch" (placing a mid-grey patch between
a very dark and a very light one), or "put the clock hand at the orientation
corresponding to where this syllable is in the list."

Most of these dimensions were readily controlled by the subject (as with
the syllable example above), some not. What I was looking for was whether
the cross-modality scaling would be the same if one dimension was presented
and the other used as an answer (A->B vs B->A); whether one got the same
kind of match between A and B if both were used as presentations with a
third dimension as the answer, for different third dimensions (is A-B the
same in A->C & B->C as in A->D & B->D); and whether it was the same if
both were used as answers (is A-B the same again in C->A & C->B and in
D->A & D->B).

It is quite clear that if the subject has control over the "cursor", the
perception of the interval is quite different from what it is when the
subject does not: the A->B match is not the same as the B->A match. If I
remember
well without going back to the original data sheets (which I still have,
I think), the perception of the one over which the subject has some influence
is more curved (i.e. the perceived distance from the end points is a
more curvilinear function of the physical distance). To me, this suggests
that the act of controlling some perception changes its magnitude (as a
perceptual signal). The act of perception then may not be passive, as
is normally assumed, and the perceptual function itself may be affected
by the fact that the perceptual signal is being controlled.

All of these data were collected long ago, for quite different reasons,
and never written up properly. But the same effect showed up in my thesis
work, which (in part) involved people doing a similar interpolation task,
but in two dimensions--putting a dot on a blank index card in the position
where they had just seen a dot on another card. If the answer-card already
had a similar dot on it anywhere near the correct position, the answers
were much more accurate than if it didn't, as if the perceived distance
from the edges of the card or the fixed dot was a curvilinear function
that was unaffected by the location of the pencil tip so long as the
subject had control. But once the answer dot had been placed, subjects
often saw their error (which was far from random). I allowed them to
do it again if they said the first attempt was wrong. The distribution
of second answer locations was indistinguishable from that of the first.

This is one of those things I keep in the back of my mind, in the little
bag marked "interesting findings that ought to mean something." Things
perceived while under control are perceived differently from the same
things not under control (not now under control or not controllable now?).

Cross-modal matchings not only don't say anything about magnitude, they also
depend on which perception the subject is controlling.

Fun.

Martin