[From Bill Powers (940804.0830 MDT)]
Peter Cariani (9403.2233) --
You have some good arguments there, good enough to take me aback and
make me consider things more deeply.
My basic reason for preferring the frequency representation is not valid
in all cases. The primary case for which the frequency interpretation is
valid is for neurons in which output impulses are not directly
synchronized with input impulses. The biochemistry is such that jolts of
neurotransmitter enter the dendrite or cell body and diffuse toward the
signal-generating region, where the average concentration raises and
lowers the threshold for recovery after a discharge. The rate at which
output impulses are generated depends on the relationship between the
chemically-varied potential inside the cell and the time it takes for
the ion pumps to bring the cell back toward the initial conditions after
an output impulse (a relaxation oscillator). Thus we have a frequency-
to-concentration-to-frequency converter. A great many computations can
be done by the way the chemical effects of neurotransmitter jolts
combine and diffuse inside the cell body, on their way toward the axon
hillock. But because of the analog nature of the chemical
concentrations, these must be primarily analog computations.
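Here is a minimal sketch in Python of that frequency-to-concentration-to-
frequency idea. It is not a model of any particular neuron; the jolt size,
decay time constant, and threshold are invented purely for illustration.

# Hypothetical leaky-integrator neuron: each input impulse adds a jolt of
# transmitter-produced "concentration," which decays away with a time
# constant; an output impulse fires whenever the concentration crosses a
# threshold and the cell resets toward its initial state.  All constants
# are illustrative, not physiological.

def output_rate(input_rate_hz, sim_time=5.0, dt=0.0005,
                tau_decay=0.05, jolt=1.0, threshold=2.0):
    conc = 0.0          # analog concentration inside the cell body
    next_input = 0.0    # time of the next input impulse
    spikes = 0
    t = 0.0
    while t < sim_time:
        if input_rate_hz > 0 and t >= next_input:
            conc += jolt                      # jolt of neurotransmitter
            next_input += 1.0 / input_rate_hz
        conc -= (conc / tau_decay) * dt       # diffusion / removal
        if conc > threshold:                  # fire and reset toward rest
            spikes += 1
            conc -= threshold
        t += dt
    return spikes / sim_time

for f_in in (50, 100, 200, 400):
    print(f_in, "Hz in ->", round(output_rate(f_in), 1), "Hz out")

With these made-up constants the output rate rises smoothly and nonlinearly
with the input rate, which is the analog-computation point.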
However, there are other kinds of neurons, particularly the "electrical"
ones where input impulses do produce synchronized output impulses. I
know very little about these -- where they occur, whether they are
associated with particular modalities of sensation, etc. In fact what I
have said just about exhausts my meager store of knowledge about how
neurons work.
Considering the analog case at one extreme and the electrical case at
the other, it seems plausible that the difference is a matter of the
time-constants of diffusion inside the cell body. If the effect of a
neurotransmitter inside the cell body diffuses away rapidly in
comparison with the duration of an impulse, then there can be no
averaging effects and any output impulses would necessarily have to be
synchronized with input impulses in ratios of small whole numbers
(including 1:1). I don't know what sorts of computations would result,
but they might be appropriate to some phenomena of audition, such as the
perception of harmonics and octaves. Also, this computer being evolved
and not designed, there are hybrid possibilities in which there is an
underlying analog relationship between input and output frequencies, but
where for certain input-output frequency ratios there are suppression or
resonance effects due to rapid fluctuations in the envelope of the
chemical concentrations. Neither the analog computer nor the digital
computer would be a good model for that case. About the only way I know
of to explore the "logic" of such systems would be through simulation.
On top of this we have what could be called a second universe of
computation, the one that results from the way neural signals are routed
and combined. Given all the intraneuronal computations, we have a
collection of elementary computing blocks which then can be combined
into larger computing blocks in which connectivity plays the major role
in determining what computation gets done. The relationship between
neurons and neural nets is then just like the relationship between
electronic components and circuits. Each electronic component performs
some fixed kind of computation which we describe as resistance,
capacitance, inductance, amplification, and so forth. But when we wire
these components together in various ways we get television sets,
radios, sonar, garage-door openers, computers, and the whole schmear.
Clearly, the system properties we see depend on which universe of
computation we're considering. PCT is concerned with the universe in
which we see TV sets, radios, and so on. Knowing the nature of the
underlying building blocks is not necessary, even though those building
blocks constitute a deeper level of explanation of how the circuits
work.
So in PCT it basically doesn't matter whether we think of a neural
signal as a variable frequency or as variable intervals between
impulses. I still prefer the frequency model because it seems that in
most cases, experience is concerned with magnitudes of things, how much
of something is present in perception, and howmuchness seems to vary in
relation to physical stimuli in the way that frequency does. However,
that is just my preference, and doesn't invalidate any observations of
relationships between intervals. I would like to point out, however,
that it would be difficult to find any subjective phenomenon that can
vary on the time scale of the interval between impulses in a normal
neural signal. When you get into very long intervals, there is really no
way to tell whether the interval itself is being apprehended directly
(something I have difficulty imagining) or whether it is being detected
and converted to a variable-frequency signal, a low frequency indicating
a long interval.
As to "adequate stimuli", this is really not a deciding kind of evidence
about anything. There are many different adequate stimuli for most
perceptions. A current of one milliampere is an adequate stimulus for
the perception of pain, temperature, sound, light, effort, taste, smell,
nausea, pleasure, or anything else you care to name. Even when temporal
patterns of electrical stimulation prove to correlate with perceptions
of taste or color, there is no reason to think that the temporal pattern
itself is being detected, or that normal perception involves temporal
patterns of stimulation. In the lower universe of computation, the
temporal pattern of an electrical stimulus could easily be converted
into simultaneous patterns of nerve signals arriving at different places
by different routes and producing steady signals of different amplitudes
in different places, effectively in parallel.
You say "... time patterns yield much, much higher quality
representations than average rate measures and, more importantly, the
way that they change corresponds much better to the observed
psychophysics..."
This is not a persuasive argument for me. You could stimulate a touch
receptor with a high-frequency pattern of pressures, in the hundreds of
hertz, and show that the spike intervals leaving the touch receptor
contained a detectable representation of the pattern. Yet for the rest
of the brain, only a steady pressure would exist (this is a technique
used in virtual reality systems for simulating touch). The
psychophysics, in that case, gives a false picture of the perceptual
significance of the signal: the fine structure is perceptually and
behaviorally meaningless.
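A sketch of that point in Python, assuming nothing about the downstream
stage except that it averages over something like 50 ms (the 200 Hz ripple
and all the constants are invented):

import math

# A rapidly fluctuating pressure pattern at the receptor: a steady push
# with a 200 Hz ripple on top of it.
dt = 0.0005
pressure = [1.0 + 0.5 * math.sin(2 * math.pi * 200 * k * dt)
            for k in range(4000)]                 # two seconds of input

# The same signal as seen through a downstream stage that averages over
# roughly 50 ms.
tau = 0.05
smoothed, y = [], 0.0
for p in pressure:
    y += (p - y) * dt / tau
    smoothed.append(y)

print("at the receptor :", round(min(pressure), 2), "to", round(max(pressure), 2))
print("downstream      :", round(min(smoothed[2000:]), 3), "to",
      round(max(smoothed[2000:]), 3))             # essentially a steady 1.0

The fine structure is there in the receptor signal, but a stage with this
sort of time constant passes on only the steady component.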
Also, I think you are failing to apply the same criterion to interval
measures that you apply to frequency measures. It's true that there is
no such thing as "instantaneous frequency" as I put it. But neither is
there any such thing as "instantaneous interval." It's the same problem
in either case: you need more than one impulse to define either
frequency or interval. If we choose interval as the measure, then any
measurement of interval will contain some jitter due to noise, and it
will be necessary to average over some number of impulse pairs to detect
the "actual" interval, the one with a psychophysical correlate. The same
is true for frequency, and to exactly the same extent (f = 1/i, after
all). In the absence of jitter, two impulses are sufficient to establish
either frequency or interval; they are the same thing looked at from
different standpoints.
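A toy illustration of that equivalence, with an invented spike train and
5 ms of jitter:

import random

# Spikes at a "true" 50 ms interval (20 Hz) with timing jitter.  Estimating
# the interval or the frequency from the same spike train requires the same
# averaging over impulse pairs; each estimate is the reciprocal of the other.
random.seed(1)
true_interval = 0.050
times, t = [], 0.0
for _ in range(200):
    t += true_interval + random.gauss(0.0, 0.005)
    times.append(t)

intervals = [b - a for a, b in zip(times, times[1:])]
mean_interval = sum(intervals) / len(intervals)

print("mean interval :", round(mean_interval * 1000, 2), "ms")
print("mean frequency:", round(1.0 / mean_interval, 2), "Hz  (= 1/interval)")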
What we both need, and are not likely to get soon, is a survey of the
building blocks of the central nervous system. I would like to see the
input-output transfer functions of neurons in terms of frequency inputs
and outputs. You would like to see the same transfer functions expressed
in terms of interval. To get this, we would have to find a neurologist
who could be persuaded that there is such a thing as a transfer function
and who wanted to find out what it is. And who would be willing to spend
the years required to make the measurements.
If we had those transfer functions, it wouldn't matter whether we
expressed them in terms of i or 1/i. We would be talking about the same
thing.
--------------------------
I think we've argued about multiplexing before. Unless the concept is
VERY carefully defined, it creates more problems than it solves.
In electronics (where the term originated), multiplexing is done by
sampling a number of input signals sequentially, and transmitting the
samples in a fixed order down a single channel.
Once the signal is travelling down the channel, there is no way to tell
whether it is a multiplexed signal or simply a single complex waveform
from a single source. In some multiplexing schemes, there is a second
channel that contains synchronizing information; in others, there are
gaps or reversals or other special signals that identify the start of a
group of samples. In any case, the receiver of the signal must be able
to sort out the individual samples and send them into individual
channels again, reconstructing the original signals. The receiver must
know which sample goes with which signal, and exactly when to switch the
single input channel to the correct output channel. This process is
called demultiplexing.
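A bare-bones sketch of that scheme in Python (the three signals and the
fixed frame order are, of course, invented):

# Time-division multiplexing: samples of three source signals are
# interleaved in a fixed order on one channel; the receiver, knowing the
# frame length and order, routes each sample back to its own channel.

sources = [
    [1, 2, 3, 4],         # signal A
    [10, 20, 30, 40],     # signal B
    [100, 200, 300, 400]] # signal C

channel = []                      # multiplex: one sample per source per frame
for frame in zip(*sources):
    channel.extend(frame)

n = len(sources)                  # demultiplex: requires knowing n and the order
recovered = [channel[i::n] for i in range(n)]

print("single channel:", channel)
print("recovered     :", recovered)

Without the agreed-on frame length and order, the sequence on the single
channel might as well be one complex waveform from a single source.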
The ONLY thing that is gained from multiplexing and demultiplexing is a
saving on the number of wires connecting the sources to their respective
destinations. There is always a loss of bandwidth and an increase in
noise, and the complexity of the circuitry is always much greater than
for a simple parallel connection. I would think that in the nervous
system there is no pressing need for saving on the number of wires, and
a much greater need to accomplish signal transmission with a minimum of
otherwise useless computations.
Multiplexing is a very different concept from extracting different kinds
of information from a single-source signal. The information extracted
from a signal depends on the kind of receiver that is used, and there is
no reason why several different kinds of receivers can't receive copies
of the same signal. One receiver of a signal from a microphone might
rectify and smooth the waves and generate an output representing the
amplitude of the signal. Another might do the same, but after passing
the signal through a high-pass filter: its output might represent the
amount of high-frequency noise. Still another might pass the signal
through a tuned filter and output a signal representing the amplitude in
a single narrow band of frequencies. And yet another might square the
fluctuations before averaging them and output a signal representing
signal power.
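Here is a rough sketch of such a bank of receivers in Python. The input
waveform and the filters are crude stand-ins, not models of any real
circuitry:

import math

# One "microphone" signal: a 5 Hz tone plus a weaker 60 Hz component.
dt = 0.001
mic = [math.sin(2 * math.pi * 5 * k * dt)
       + 0.2 * math.sin(2 * math.pi * 60 * k * dt) for k in range(3000)]

def smooth(xs, tau):              # simple leaky average (first-order low-pass)
    y, out = 0.0, []
    for x in xs:
        y += (x - y) * dt / tau
        out.append(y)
    return out

# Receiver 1: rectify and smooth -> overall amplitude.
amplitude = smooth([abs(x) for x in mic], 0.3)[-1]

# Receiver 2: crude high-pass (signal minus a fast low-pass of itself),
# then rectify and smooth -> amount of high-frequency content.
fast = smooth(mic, 0.002)
hf = smooth([abs(a - b) for a, b in zip(mic, fast)], 0.3)[-1]

# Receiver 3: crude band-pass (difference of two low-passes), rectify and
# smooth -> amplitude in a band around a few hertz.
band = [a - b for a, b in zip(smooth(mic, 0.01), smooth(mic, 0.1))]
band_amp = smooth([abs(x) for x in band], 0.3)[-1]

# Receiver 4: square and smooth -> signal power.
power = smooth([x * x for x in mic], 0.3)[-1]

print("amplitude      ~", round(amplitude, 3))
print("high-frequency ~", round(hf, 3))
print("band amplitude ~", round(band_amp, 3))
print("power          ~", round(power, 3))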
In PCT this concept is carried even further. The input function of a
generalized control system receives many signals from different sources
at the same time. It combines these signals by performing operations on
them that amount to evaluating a function of which the inputs are
arguments. The very same set of signals can enter many different input
functions at the same time, and each input function can "extract" a
different function of the set of signals. The values of these different
output signals are treated and experienced as different aspects of the
environment from which the input signals came.
But those "aspects" of the environment have no necessary physical
meaning. They are defined strictly by the form of the input function
that is applied to the signals. There is no limit on the number of
different input functions that might be applied, and each one would show
a new aspect of the environment, slightly or greatly different from the
others, coexisting in the same set of input signals. An external
observer, of course, would have to apply the same functions to signals
representing the same parts of the environment to understand what the
input function was seeing -- but then the observer would agree that this
aspect of the environment can, indeed, be seen.
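For instance (a made-up set of signals and made-up input functions, just to
show the idea):

# The same set of input signals, passed through different input functions,
# yields different perceptual signals -- different "aspects" of the same
# environment.  Both the signal values and the functions are arbitrary.

signals = [3.0, 5.0, 2.0]

input_functions = {
    "weighted sum"  : lambda s: 0.5 * s[0] + 0.3 * s[1] + 0.2 * s[2],
    "difference"    : lambda s: s[0] - s[1],
    "ratio"         : lambda s: s[0] / s[2],
    "largest signal": lambda s: max(s),
}

for name, f in input_functions.items():
    print(name, "->", f(signals))

Each output is a perfectly real aspect of the same three signals; none of
them is the signals' one true meaning.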
This concept puts a new light on "psychophysics." What psychophysics
does is to compare one way of perceiving the environment (through
instruments and eyes) with another way of perceiving it (through
instruments measuring the outputs of neural input functions, or through
subjective reports about the outputs). This makes it clear that
psychophysics contains an arbitrary and subjective component, the
observer's choice of a way of measuring the "actual" situation. So
psychophysical measurements are by no means the end of the story.
----------------------------------------------------------------------
Bill Leach (940803.2106 EDT) --
I have a double-whammy apology to make: first to Martin Taylor for
incorrectly correcting his comment on pull-only systems (just cancel
whatever I said), and second for failing to see that you were right and
that NEITHER Martin nor I was right, not once but several times.
You said that if reference signals are set to zero for a pair of opposed
one-way control systems of the standard design, there will be no output
and thus no resistance to any disturbance. That is quite correct. If the
reference signal is zero, the only other active input to the comparator
is an inhibitory signal, so there is no way there can ever be any error
signal to drive the output. If this is true on both sides of the opposed
pair, that is if both sides receive a zero reference signal, then
neither side will ever produce an error signal no matter what the
perceptual signals do. Thus any disturbance, even if it alters the
controlled quantity, will be unresisted. The perceptual signals will
change, but as they are both inhibitory nothing will happen. Turning
both reference signals to zero is equivalent to turning both control
systems off.
There is no simple and universal way to decide what effects one-way
control systems will have when combined. In our canonical diagram, the
reference signal is represented as excitatory and the perceptual signal
as inhibitory. But the opposite signs are just as feasible -- and in
fact the opposite signs are found in second-order control systems in the
brain stem. The comparators for such control systems seem to be located
in a layer of the motor nuclei. Collaterals from (copies of) upgoing
sensory signals cross over from the sensory tracts to the motor nuclei
where they synapse uniformly with an _excitatory_ effect. The signals
entering the same layer of the motor nucleus from higher systems have an
_inhibitory_ effect. So the outcome is still an error signal -- but the
signs are upside down relative to the canonical model. This means that
with zero reference signal entering the comparators, a disturbance of a
controlled quantity will lead to positive perceptions which will create
positive error signals, and the control systems will be active. The only
way to turn opposing pairs off is to supply both of them with _large_
reference signals, large enough to completely inhibit the effects of the
largest possible perceptual signals.
If the reference signals from higher systems are lost, the second-order
control systems will be controlling -- actively -- for zero perceptual
signal. In other words, "reflexes" will be seen, but the system will be
actively controlling for a state of zero perceptual signal. What we will
observe, therefore, is the meaning of zero perceptual signal. If I am
not mistaken, what we get is "decerebrate rigidity" with the limbs fully
extended and very resistant to disturbance. You can set a decerebrate
cat on its feet and use it for a doorstop (yuk, sorry). Zero perception,
evidently, goes with the state of full extension.
What both Martin and I failed to do was to simulate the system before
opening our mouths. Simcon 4.5 can do this just fine.
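For anyone without Simcon handy, here is a rough stand-in in Python. The
gains, the disturbance, and the time scale are invented; only the loop
structure matters.

# Two opposed one-way control systems acting on one controlled quantity,
# both given reference signals of zero.  perception_sign selects the sign
# convention: -1 is the canonical model (reference excitatory, perception
# inhibitory); +1 is the brain-stem arrangement described above (perception
# excitatory, reference inhibitory), with the output connection flipped so
# that feedback stays negative.

def run(perception_sign, steps=2000, dt=0.01, gain=50.0):
    out_a = out_b = qc = 0.0
    for k in range(steps):
        disturbance = 5.0 if k > steps // 2 else 0.0
        qc = disturbance - perception_sign * (out_a - out_b)
        p_a = max(qc, 0.0)                      # one-way perceptual signals
        p_b = max(-qc, 0.0)
        e_a = max(perception_sign * p_a, 0.0)   # comparator with zero reference
        e_b = max(perception_sign * p_b, 0.0)
        out_a += gain * e_a * dt                # one-way integrating outputs
        out_b += gain * e_b * dt
    return qc

print("canonical signs, zero refs :", round(run(-1), 3), "-> disturbance unresisted")
print("brain-stem signs, zero refs:", round(run(+1), 3), "-> qc driven back to zero")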
> Obviously one can get one's self quite turned inside out with this
> stuff and in this case it is quite easy by just looking at the idea
> that Pa = -Pb (and again vice versa) [ever work with old Honeywell
> logic?].
That's why we have to use simulations. It's amazing how we can fail to
grasp the real operation of even very simple systems. Like the old joke
in which you ask a person if he can add 1 and 1, and when he says yes
you say "All right, how much is 1 and 1 and 1 and 1 and 1 and 1 and 1
and 1 and 1 ..."
-------------------------
On disturbances, generally OK.
But, "beating a dead horse", we are interested in systems where control
is not good too.
Our models are actually capable of much better control than the human
subjects in tracking experiments. If we make the disturbance too easy
(slow and small), the subjects track so nearly perfectly that even a
simple proportional control model with no delays fits just as well as
the more complex integrating model with delays. We usually try to pick
disturbances difficult enough to produce tracking errors of about 10%
RMS (which feels like pretty bad tracking). This is where we get the
best fits of the model to the data. The model's performance has to be
DEoptimized to fit the real behavior as well as possible. We have to
have imperfect tracking to allow evaluating the parameters accurately.
If control is too bad, it becomes difficult to characterize. There are
too many possible reasons for really bad or absent control, starting
with a subject who is just fooling around to see what will happen.
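A toy version of that fitting procedure, in Python. The simulated "subject"
is only a stand-in for real tracking data, and every number is invented:

import random

# A simulated "subject": an integrating controller with a transport delay
# and a little motor noise, tracking a smoothed random disturbance.  The
# gain of a simple delay-free integrating model is then adjusted to
# minimize the RMS difference between model and subject cursor traces.
random.seed(2)
dt, steps = 1.0 / 60.0, 1800                      # one minute at 60 Hz

d, disturbance = 0.0, []
for _ in range(steps):
    d += random.uniform(-1, 1) - 0.02 * d         # slow random disturbance
    disturbance.append(d)

def track(gain, delay_steps, noise):
    handle, trace, history = 0.0, [], [0.0] * (delay_steps + 1)
    for k in range(steps):
        cursor = handle + disturbance[k]
        history.append(cursor)
        seen = history[-(delay_steps + 1)]        # delayed perception of the cursor
        handle += gain * (0.0 - seen) * dt + random.gauss(0.0, noise)
        trace.append(cursor)
    return trace

subject = track(gain=8.0, delay_steps=9, noise=0.05)   # stand-in for real data

def rms_diff(gain):
    model = track(gain, 0, 0.0)
    return (sum((a - b) ** 2 for a, b in zip(model, subject)) / steps) ** 0.5

best_gain = min((g / 10.0 for g in range(10, 200)), key=rms_diff)
print("best-fitting model gain:", best_gain,
      "RMS difference:", round(rms_diff(best_gain), 3))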
-------------------------
> I think I am having a "first cause" crisis!
There, there, we have all had it. The solution is very simple. There are
levels of control in the brain. The brain is not infinite; therefore
there is a highest level. At the highest level, there are no other
higher control systems to manipulate the reference signals; we have run
out of brain. As a consequence, the highest level of reference signals
must be set in some way that doesn't depend on having still-higher
systems. During development, the highest level is lower than the highest
level existing in an adult. Therefore we need to postulate mechanisms
for setting the highest level of reference signals no matter what the
highest level happens to be at a given time.
The highest reference signals may come from memories of past perceptions
at the same level. They may be set at random by reorganization. They may
be zero (zero has a meaning with respect to perceptions). Some may be
inherited.
In short, there may be a problem here, but the answer isn't infinite
regress. If you try to regress higher than the highest level, you'll
find yourself looking at the inside of the skull.
> I guess the difficulty I have with this in terms of "free will" is that
> I don't see a mechanism that is not deterministic for the choice.
Neither do I, other than the randomness of reorganization which isn't
much comfort to those who want to lay blame. Free will by definition has
to be unsystematic, so about the only situations in which it can have
any effect are those where something new is being created for no reason
and without justification. Maybe that's all we really need to satisfy
the sense of being in charge here.
> I guess I am wrestling with the idea that "deterministic" does not mean
> "predestined". Indeed, "deterministic", "predestined" and "free will"
> as dealt with (and defined) in philosophy quite possibly are
> meaningless terms when behaviour is seen as control of perception.
I think that's getting close to the right answer. Predestination and
absolute determinism have been laid to rest by the discovery of Chaos.
Only purpose remains as an explanation of behavior that can create
regular outcomes in a chaotic universe. And even that is constrained, or
so it seems, by our inherited requirements for staying alive and
functioning. If this were an easy problem we could read the answer on
clay tablets in a museum.
-----------------------------------------------------------------------
Best to all,
Bill P.