Pa > Pb etc.; Interval vs frequency

[Bill Leach 940809.05:32 EST(EDT)]

[Bill Powers (940808.0400 MDT)]

bit for this sort of operation. That is, large errors would be seen as
a reorganization system perceptual error except when an associated gain
was set to zero. Messy!

Not so messy. Raising output gain doesn't increase error, it reduces
error. Raising it too much, of course, causes incipient instability,

I don't have a problem with the idea that raising gain tends to reduce
error and indeed it is the opposite idea that I was questioning. If the
reference for tension (for example) were set to a low value at the same
time that the reference for position were set to some value quite
different from the current perception and considerable resistance was
present in the environment, then the tension control loop would be
satisfied (little or no error) but the position loops would be in a
condition of large error.

Would such an error be perceived by the reorganization system? It seems
to me that it should not be, but then the only "excuse" I can come up
with is that perhaps, in such cases, the level of control is "below"
that which the reorganizing system senses.


I really have to think about that one a bit. I don't have trouble with
what you are saying but I am not convinced that necessarily means that
the physical reference value has reached a physical limit. That is, just
because I can't perceive myself doing something does not, to me, seem to
have to mean that I have reached the physical maximum prr of the

Of course, if perception is generally an inhibiting signal and the
reference an exciting one, then my concerns about dynamic range seem
pretty generally solved. That is, the effect of the reference can always
"exceed" the effect of the perception regardless of relative dynamic range.

And if we accept the theory that reference signals are stored memories
of perceptions, a reference signal with a greater dynamic range than the
perceptual signals produced by real PIFs is impossible.

I am not sure that I can fully accept this assertion. It seems to me
that to do so would preclude the generation of any 'new' behaviour in the
presence of constant environmental conditions. Reorganization might be
the solution to the dilemma that I sense, but I do not feel satisfied
with that idea yet.

We do occasionally do things that we probably never perceived before
(even in imagination -- claims of 'motivational psychologists'
notwithstanding). I admit to having trouble thinking of how such a thing
could be possible, except for the idea that we might be able to drive a
reference to a value beyond any value that has previously been set, but
I am certainly not clear in my thinking here.

Pa/Pb stuff

This is probably 'the wrong attitude', but just the idea that the
perception is an inhibiting signal is sufficient for my purposes.

A minor problem that I see is that if both tension and position are
considered, then when "dropping" one's arm to one's side the question
comes up: why would this be accomplished by setting tension to zero but
maintaining the position reference? It seems to me that controlling in
such a manner would create an unnecessarily large position error, and
that both references would need to be set to "zero".


[From Bill Powers (940808.0400 MDT)]

Bill Leach (940807.2005) --

As far as I understand it, the very term "saturate" for a PIF signal
implies things that do not seem likely to me. I recognize that there
is a possibility that some PIFs might reach a signal level where they
require a "significant" amount of time following removal of stimulus
before the PIF signal will begin tracking the new stimulus level. I
also suspect that these signal levels are not for the normal control
loop(s) but rather are those sorts of signals that we describe as pain
and likely cause a perceptual error for control loops that are normally
rather quiescent.

Good point, Bill. I let that one slide right past me. About the only
input functions I can think of that could be saturated would be the
first-level intensity receptors: sounds, light intensities, pressures,
tastes, smells that arise from too much direct stimulation. At higher
levels, it's pretty hard to think of what "too much" could possibly
mean. Even at the sensation level, "too much" chocolateness is
impossible; the most "chocolate" you can get is the pure taste you call
chocolate with all the sensation-components in the right proportions.
And what would "too much configuration" be?

When you look at transitions, the primary limit on perception is how
rapidly the underlying perceptions can be reported in two different
states -- how rapidly, for example, a cube-detector can follow a change
in shape with a change of signal. If you look at a spinning object, as
it speeds up the image begins to blur long before you run out of room to
perceive faster motion. The blurring comes from the time constant at the
retina, not from saturation of the transition detector. The same goes
for events; no maximum eventness, just a running-together of the
elements of events. Saturating a relationship-detector or a category-
detector sounds pretty silly; what is too much "on-ness" or too much
"dog-ness?" What's too much sequential ordering or too much long-
division program? In terms of too much to perceive, of course, not too
much in comparison with a reference signal.

When you think of a PIF as a mathematical operation there seems to be no
limit unless you impose one. But if you think of the underlying meaning
of perceptual signals, the concept of saturation doesn't look so
plausible.

While you may well be right on the idea that the "gain" is lowered to
zero to create such observed conditions as we describe as "limp", I
seriously doubt that is how it is accomplished. For one thing,
changing gain then would require yet another perceptual control system.

Another hit. If gain is varied by a higher system, it must be varied in
order to control something perceivable that depends on gain. In terms of
arm control, there is such a perception: in a low state we call it
relaxation and in a high state we call it tension. We vary it because it
affects the sense of effort involved in an action, and because it affects
the accuracy of control (the tradeoff between speed and precision, for
example). Loop gain in an arm control system can be varied by varying
the level of tension being maintained in opposing muscles, which runs
the muscles up and down their nonlinear square-law output curves -- the
"common-mode" reference signal.

This applies to all parameter control by a higher system. If a parameter
is varied by the output of a higher system, it must be having some
effect on the lower systems that the higher system can perceive.

If that were not bad enough, it seems to me that the general idea of
how the reorganization system operates would have to be "played with" a
bit for this sort of operation. That is, large errors would be seen as
a reorganization system perceptual error except when an associated gain
was set to zero. Messy!

Not so messy. Raising output gain doesn't increase error, it reduces
error. Raising it too much, of course, causes incipient instability,
which we perceive as "tremor" or "overcontrolling" (depending on the
frequency), and we cut back on the gain to re-establish stability. If,
that is, we can perceive some effect that depends on stability.
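
The gain-versus-stability point can be demonstrated with a toy loop. This
is a sketch under simplifying assumptions (a one-step discrete loop with
no transport lag); the gain values are chosen only to show the three
regimes, not taken from any fitted model.

```python
# Illustrative discrete control loop: each step, the output corrects the
# controlled perception by gain * error.

def run_loop(gain, steps=60, reference=1.0):
    """Return the error remaining after `steps` iterations."""
    p = 0.0                          # controlled perception
    for _ in range(steps):
        p += gain * (reference - p)  # output acts on the perception
    return reference - p

for g in (0.2, 0.9, 1.9, 2.2):
    print(f"gain {g}: remaining error = {run_loop(g):.3g}")
```

Moderate gain wipes out the error quickly; gain near the stability limit
leaves a decaying oscillation (the "tremor" regime); gain beyond the
limit makes the error grow without bound.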

In a real living control system (at least one that I am a little
familiar with), I can set references for most perceptions such that the
control system cannot achieve control.

I can see two ways to do this: try to put your limbs into a position
they can't reach (touch right elbow with right forefinger) or try to
create a motion that your muscles aren't strong enough to produce (like
lifting a car off the ground or swinging your arm in circles at 5
revolutions per second). The limit is set pretty much by the output
limits, not by perceptual limits.

If you try to set a reference level for something that exceeds
perceptual limits, I think you would have a problem. Can you imagine an
arm swinging in circles at 30 revolutions per second? I can't. I can't
even imagine what such a perception would look like. That's the problem,
isn't it? We derive reference signals from memories, and if we can't
find an imaginary image of something it's pretty hard to select it as a
reference signal. Imagine a green that is too green to perceive (as
opposed to too intense to perceive). I say it can't be done.

Try to imagine a relationship that is so much something-or-other that it
can't be perceived. Try to imagine logic that is too complex to
comprehend -- almost a self-contradiction, isn't it? You can imagine not
comprehending it, but you can't imagine what it is that you can't
comprehend. And if you can't imagine it, you can't set it as a reference
signal.

Such "personal experience" is not conclusive evidence by any means even
when supported with the logical idea that such living control systems
actually need greater dynamic range for the reference than for the PIF.

And if we accept the theory that reference signals are stored memories
of perceptions, a reference signal with a greater dynamic range than the
perceptual signals produced by real PIFs is impossible.


As to all the Pa and Pb stuff, all I have seen so far is arguments in
words about a quantitative situation. Verbal reasoning just isn't up to
this sort of thing. Do a simulation, or work out the exact math,
treating each possible case and proving that you have covered all the
cases. Nobody is going to arrive at the right answer by blathering away
in words.

Rick Marken (940807.2110) --

RE: perception by spike interval rather than frequency

What I was asking for was a model of a _control system_ that controls
the perceptual signal that is the output of such a function.

Yes, that's the critical question. Actually, the solution to your
guitar-tuning example is extremely simple. First, make a model in our
usual terms, with all signals represented as frequencies, the input
function being defined as a pitch-to-neural-frequency converter and the
output function as a neural-frequency-to-tuning-peg-angle converter.
Match the model to real behavior and get all the parameters.
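
The frequency-domain model just described can be sketched in a few lines.
Everything here is hypothetical: the function name, the constants k_in
and k_out, and the starting pitch are invented for illustration, not
parameters matched to real behavior.

```python
# Toy guitar-tuning loop with all signals expressed as frequencies.

def simulate_tuning(pitch_ref=440.0, pitch=430.0, k_in=0.5, k_out=0.01,
                    steps=2000):
    """k_in:  pitch-to-neural-frequency converter (impulses/sec per Hz).
    k_out: lumped output gain -- neural error frequency to peg-angle
           change to Hz of pitch change per step."""
    r = k_in * pitch_ref             # reference as a neural frequency
    for _ in range(steps):
        p = k_in * pitch             # perceptual signal
        error = r - p
        pitch += k_out * error       # output function turns the peg
    return pitch

print(simulate_tuning())             # converges near the 440 Hz reference
```

Once a loop like this is matched to behavior, its parameters are what get
carried over into the interval representation discussed next.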

Then just redefine all signals and functions in terms of interval, which
is 1/frequency, converting the parameters as appropriate. If the
perceptual signal in our model is 0.5 impulses per second per hertz of
input pitch, then the perceptual signal in the converted model is 2 sec
per impulse per hertz of pitch. The reference signal is given in sec per
impulse, too, so we have 1/error = (1/i*) - (1/i), with error and
interval now expressed in sec per impulse. Now the error signal in the
new units is

error = (i * i*)/(i - i*).

The nonlinearity of this equation is compensated by the nonlinearity in
the output function when expressed as tuning angle per spike-interval.
The system equations will be EXACTLY the same, except that they will be
written in a way that involves some unpleasant analytical forms.
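
The equivalence of the two representations is easy to check numerically.
This is a sketch with arbitrary illustrative numbers: with f = 1/i and
f* = 1/i*, the frequency-domain error f* - f equals (i - i*)/(i*i*), and
its reciprocal is the same error expressed as an interval.

```python
# Verify that the interval form of the error is the reciprocal of the
# frequency form, as the conversion requires.

def freq_error(f_ref, f):
    """Error with signals expressed as frequencies (impulses/sec)."""
    return f_ref - f

def interval_error(i_ref, i):
    """Same error as an interval (sec/impulse):
    1/error = 1/i_ref - 1/i  =>  error = (i * i_ref)/(i - i_ref)."""
    return (i * i_ref) / (i - i_ref)

f_ref, f = 440.0, 435.0              # reference and actual frequencies
i_ref, i = 1.0 / f_ref, 1.0 / f      # the same signals as intervals
print(freq_error(f_ref, f))          # error in the frequency form
print(1.0 / interval_error(i_ref, i))  # reciprocal of the interval form
```

The two printed values agree (to floating-point precision), which is the
point: the underlying train of impulses is the same either way.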

There is really no difference at all between the interval representation
and the frequency representation. The underlying physical situation is
precisely the same: a train of impulses. You can describe exactly the
same train in terms of the elapsed time occupied by a given number of
impulses or in terms of the number of impulses occurring in a given
elapsed time. The only possible reason for choosing one over the other
is that the equations might be easier to solve in one form, or one form
might more directly represent the physical operation of a detector or

By the same token, any relationship expressed in terms of spike
intervals can be transformed by elementary means into another expression
in terms of spike frequency -- if you gain anything by doing so. You can
even mix the two in the same model, if you keep track of units.

Even in control theory there are two approaches: the time-domain
approach and the frequency-domain approach. They are entirely
equivalent, although which one you choose may determine whether you can
find an easy way to solve the equations.

Best to all,

Bill P.