Simulations, SD and Control Theory

[From Bob Eberlein 980910.0923 EDT]

Hi Bill,

Thanks for the model. I have rearranged it to fit the standard formulation
of SD models more closely. You will see that both mathematically and
computationally it is identical to your model speak3.

Reformulating the rate:

The problem with doing it there is that you introduce a whole extra
iteration of lag in the Speaking Effort variable. I have changed your pure
integrator into a leaky integrator with gain, a fairly standard kind of
analog computing block that Vensim doesn't have but which is easily
synthesized. The leaky integrator is formulated as follows.

Moving the computation out of the level to the rate changes nothing. The way
that we would normally represent a leaky integrator is the one in speak4. The
Level equation becomes

Level = INTEG(active changes - leak,initial value)

with leak set to

leak = Level/leaking time

With the model I sent I left the leak as level/(gain*slowing) - concepts which I
only dimly comprehend.

>More importantly this equation gives the behavior the property that in a
>steady state condition where a person is speaking at the right volume and
>receiving no signals that a change is required they will slowly decrease
>their speaking effort.

If you look at the reformulated model it should be clear that as long as speaking
effort is positive the leak will be positive. In order to stop the level from
changing, this means that the effort adjustment must be positive. This will only
happen when there is a perceived volume discrepancy. The model does, of course,
have a steady state. But this steady state occurs when there are constant
environmental signals to speak louder. If we were to turn the model around and
measure quietness instead of loudness, the steady state would occur with constant
signals to speak softer. This is what I find troubling.

I am still pretty lost as to the meaning of GAIN and SLOWING. On the one hand
SLOWING increases the effective effort adjustment response, which is the same as
increasing "time to adjust effort", and this makes sense. The use in the leak I
don't get. The GAIN I am more lost by, but perhaps that is because my formulation
has run too far afield from the normal context in which that term is used.

Bob Eberlein

speak4.mdl (59 Bytes)

[From Bill Powers (980910.1025 MDT)]

Bob Eberlein 980910.0923 EDT--

>Thanks for the model. I have rearranged it to fit the standard formulation
>of SD models more closely. You will see that both mathematically and
>computationally it is identical to your model speak3.

Can we start over? This is getting too complicated for me. Let's try to
boil down the model to its bare essentials, then see if we can express both
of them in some sort of "canonical" form (in each idiom).

First, the confusing terms GAIN and SLOWING. These arose from a particular
way of conceiving an amplifier with a single-pole low-pass filter in it.
The overall effect is that of an integrator which has a leak in it, the
leak being proportional to the integrated value at any time. The output of
this function will rise as the input accumulates, but will come to an
asymptotic value when the input rate equals the leak rate.

You use essentially this scheme in your "Smooth" functions. The way I write
it is

y := y + (gain*x - y)*dt/slow.
(dt is the duration of one iteration in physical time)

Written this way, if x is a step function with any initial value and a
final value of X, y will eventually come to the value gain*X. The slowing
factor "slow" determines the initial slope of the rise of y from its
beginning value toward its final value. This form has the handy properties
that "gain" specifies the steady-state output/input ratio, and "slow"
indicates how long it will take for all but 1/e-th of the change to happen
after a step-change in the input.
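
This update rule can be sketched as a short Python loop; the particular gain, slow, and dt values here are arbitrary illustrative choices, not anything from the Speak models.

```python
def leaky_integrator(xs, gain, slow, dt, y0=0.0):
    """Iterate y := y + (gain*x - y)*dt/slow over a sequence of inputs x."""
    y, out = y0, []
    for x in xs:
        y += (gain * x - y) * dt / slow
        out.append(y)
    return out

gain, slow, dt = 2.0, 1.0, 0.01
X = 5.0                                   # step input held at a constant value
ys = leaky_integrator([X] * 2000, gain, slow, dt)

print(round(ys[-1], 3))                   # settles at gain*X = 10
i = round(slow / dt) - 1                  # index of t = slow (one time constant)
print(round((gain * X - ys[i]) / (gain * X), 3))   # remaining fraction, near 1/e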

Another way to write exactly the same relationship is to say

y := y + k1*x*dt - k2*y*dt

This is a direct expression of the underlying physics: k1 tells how much y
is integrated up in one iteration by the current value of x, and k2 tells
how much y leaks away on each iteration, where the duration of an iteration
in physical time is given by dt. Notice that this translates into the
previous expression if we say

k1 = Gain/Slow, and
k2 = 1/Slow
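
A quick numerical check, with arbitrary illustrative constants, that the two write-ups really are the same rule under that substitution:

```python
gain, slow, dt = 3.0, 0.5, 0.02     # arbitrary illustrative values
k1, k2 = gain / slow, 1.0 / slow    # the translation given above

y1 = y2 = 0.0
for n in range(500):
    x = 1.0 if n >= 100 else 0.0    # a delayed step input
    y1 = y1 + (gain * x - y1) * dt / slow   # gain/slow form
    y2 = y2 + k1 * x * dt - k2 * y2 * dt    # physical k1/k2 form

print(abs(y1 - y2) < 1e-9)   # True: the two forms track each other
```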

If x represents a literal flow and y is the cumulative sum of the
increments of flow, then k1 is 1 -- conservation of matter, or charge, or
whatever. But if we're talking about signals in some electronic or neural
system, k1 can be anything; no conservation law is implied. If we integrate
electrical current I in a capacitor to produce a voltage V, the voltage is
inversely proportional to the capacitance C:

V = INTEG(I/C)

If there's a resistor of R ohms across the capacitor, there's a leakage
current of V/R, so we have

V = INTEG(I/C - V/(RC))

If you want to keep your INTEG function pure, then the flow variable can
contain all the stuff in parentheses, and the Level variable can just be
written as

Level = INTEG(flow, initial value)

I have no objection to that.

In the Speak models, the flow variable is written as "required effort
adjustment/time to adjust." Some of the mysterious (to me) details of your
model would be explained if you were thinking of control as a process of
computing how much error there is, and then computing how much output is
needed to correct that amount of error. While this approach will yield
control of a sort, it involves a great deal of unnecessary computation
(those simultaneous adjustments of parameters in multiple functions). There
is a much simpler way to do it.

Basically, all you need is a process that will guarantee that the error
(what you call the "gap") gets smaller on every iteration. Of course you
want this to happen as rapidly as possible, but that is also easy to
arrange in simple systems.

In the PCT model, this is done as follows.

First, the physical variable to be controlled is represented by a
perceptual signal inside the control system, via some sort of input sensor.

Second, the perceptual signal is compared with a reference signal that
stands for the desired value of the perceptual signal, and hence of the
controlled variable.

Third, the resulting error signal ("gap") is fed into an output function,
where it is amplified and turned into a physical effect that can alter the
value of the variable to be controlled.

And that is basically all there is to it, details aside.

One of the details concerns how to make sure that even small errors or gaps
lead to large outputs, so the final error will be kept very small. The size
of the output amplification is determined by adjusting the Gain factor, and
the system is kept from going into spontaneous oscillations by adjusting
the Slowing factor, which is essentially the same as your "time to adjust"
factor.

If possible, of course, we would like infinite Gain, because that would
imply zero steady-state error. But we can't have that for two reasons.
First, there are no perfect integrators in nature, especially not in neural
circuits. And second, there are time-delays in real systems, and gain and
speed of response have to be adjusted to achieve stable control, without
runaway or oscillation. There are algorithms for achieving the optimum
parameter adjustments automatically, or they can be achieved by trial and
error.

To illustrate this, I've boiled the Speak model down to a very simple form,
with no extra intervening variables and no attempt to account for why the
"desired volume" is set where it is. Furthermore, I've tried to separate
physical processes in the environment from neural computing processes in
the brain, as shown by the dashed line in Speak5.mdl. Inside the brain, the
signals are all in NSU -- "Nervous System Units," or impulses per second.
I've put the Slow and Gain parameters into the flow variable in the output
function, so the Level variable is as simple as possible.

To show how good the control is, I've included a disturbance that acts to
change the transmission factor from mouth to ear. The transmission factor
is 1 for 40 seconds, then becomes 2 for the next 40 seconds, reverting to 1
after that. The plots show that the "speaking effort" variable decreases
when the external transmission factor increases, keeping the delayed
perceived volume at the intended volume very accurately except for a brief
transient effect at the onset and offset of the disturbance.

I changed the time scale to Seconds and the delay to 0.25 Second, to be in
line with real time-scales and delay times.
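
The run described above can be sketched as an ordinary Python loop. This is not the Vensim model itself: Gain = 500 and Slowing = 4000 are the values mentioned later in the thread for Speak5, while the 100 NSU reference and dt are my own illustrative choices. Only the 0.25 s delay and the 1 -> 2 -> 1 transmission schedule come from the text.

```python
from collections import deque

GAIN, SLOWING = 500.0, 4000.0   # from the Speak5/Speak7 discussion
REFERENCE = 100.0               # intended volume in NSU (illustrative value)
DT = 0.01                       # seconds per iteration (illustrative)
DELAY = round(0.25 / DT)        # 0.25 s perceptual delay, in iterations

effort = 0.0                                  # Speaking Effort (the Level)
ear = deque([0.0] * DELAY, maxlen=DELAY)      # sound in transit to perception
samples = {}

for n in range(round(120 / DT)):
    t = n * DT
    transmission = 2.0 if 40 <= t < 80 else 1.0       # the disturbance
    perceived = ear[0]                                # delayed perceived volume
    error = REFERENCE - perceived                     # the "gap"
    effort += (GAIN * error - effort) * DT / SLOWING  # leaky-integrator output
    ear.append(transmission * effort)                 # mouth-to-ear transmission
    if n in (3999, 7999, 11999):                      # end of each 40 s segment
        samples[(n + 1) // 100] = (round(effort, 1), round(perceived, 1))

# Speaking effort roughly halves while transmission is doubled, yet perceived
# volume stays near the 100 NSU reference throughout.
print(samples)
```

The printed samples at t = 40, 80, and 120 s show the behavior described: when the transmission factor doubles, the output variable drops to compensate and the controlled perception barely moves.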

I hope that this model comes closer to fitting SD conventions.

The way we use a model like this in PCT is to adjust its parameters until
the behavior of the model fits the observed behavior of a real person as
closely as possible. In the present case, we would put an amplifier between
a microphone and some earphones, so we could give the human being the same
disturbance as the model gets. Then we could record the human being's
output sound energy just as we record the model's output, and adjust the
parameters of the control model (perceptual delay, Gain, and Slowing) for
best fit. I notice that in Vensim Professional there appears to be a way to
do this automatically -- if so, we can have some fun.

If you can model the same situation according to SD conventions, I think
we'll have a basis for going on. I'll reserve Speak6.mdl for you.

Best,

Bill P.

speak5.mdl (60 Bytes)

[From Bob Eberlein (980910.1600 EDT)]

Hi Bill,

Thanks for the model. I think it has clarified one of the differences in the
way you approach this problem. I have attached speak6.mdl - it highlights
what would be a normal SD formulation for your model - and also shows the
alternative.

I think the important distinction that is shown is that in the absence of an
active error signal (that is an error signal with value 0) your model shuts
down activity. Put more poetically, in the absence of active signals that
something is not perfect all activity stops.

In my formulation in the absence of an active error signal no changes are made.

If I have understood this correctly there is a pretty fundamental difference
between the way an SD modeler would look at a human control process and the PCT
perspective. It is really a question of whether change or action is based on
an error signal. For a SD modeler the goal of volume adjustment is to make
perceived volume match an intended value. The fact that other things might

I apologise for adding in another variable, but part of the approach I believe
in is simple formulas with extra variables if necessary. This is actually
easier for me to understand.

And yes, the higher end versions of Vensim will automatically adjust parameters
so that model behavior will fit data.

I hope this is helpful. I think there really may be a difference in the way we
approach these problems.

Bob Eberlein

[From Bob Eberlein (980910.1614)] -

here is the model I forgot to attach to my last post.

Bob Eberlein

speak6.mdl (59 Bytes)

[From Bill Powers (980910.1442 MDT)]

Bob Eberlein (980910.1600 EDT)--

>Thanks for the model. I think it has clarified one of the differences in the
>way you approach this problem. I have attached speak6.mdl - it highlights
>what would be a normal SD formulation for your model - and also shows the
>alternative.
>
>I think the important distinction that is shown is that in the absence of an
>active error signal (that is an error signal with value 0) your model shuts
>down activity. Put more poetically, in the absence of active signals that
>something is not perfect all activity stops.

In my model, activity is continuous -- it never stops unless the reference
signal and all disturbances are zero. Of course if the error _were_ exactly
zero, then any activity would cause the perception to depart from the
reference level and make it nonzero.

But I can't really discuss this without seeing how your models would
actually work. I was puzzled because when I tried to look at the equations,
nothing happened! I didn't realize for a while that the colored boxes were

If you have time, could you actually implement those two models? Maybe I
would understand your "poetic" description better if you did.

Just so you'll know, my model is just the basic servomechanism or negative
feedback control system that engineers have been building for 50 years.

Best,

Bill P.

[From Bill Powers (980910.1455 MDT)]

Bob Eberlein (980910.1600 EDT)--

A double take: you said

>In my formulation in the absence of an active error signal no changes are
>made.

This would also be true of my system if the integrator were not leaky. I
use a leaky integrator out of habit -- real integrators, such as those in
the nervous system, are all leaky to some degree, so I almost always use
that formulation. However, as the folks on this list will substantiate,
when in a hurry or not being fussy, I often use a pure integral as the
output function, and a control system with a pure integral output will not
stop adjusting its output until the error is exactly zero. And then it
_will_ stop adjusting its output, provided that any disturbances are
constant and the reference level, or desired value of the controlled
variable, is not changing.

I think our differences are in terminology only; when it comes to the
actual model, as opposed to the conceptual picture of what makes it work,
the operation is the same. I attach Speak7.mdl, in which the leak has been
plugged and the slowing factor is not used. The gain is now a small number,
corresponding to Gain/Slowing in the previous model, or 500/4000 or 0.125.
That number in the numerator, coincidentally, corresponds to the optimum
value of 8 in the denominator for the best "time to adjust" parameter that
you suggested earlier. The performance is indistinguishable from that of
Speak5.mdl.
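
That claim is easy to check numerically. Here is a sketch comparing the two output functions (unit feedback, no delay, and the dt and reference values are illustrative assumptions; the 500, 4000, and 0.125 figures are from the text).

```python
def run(pure, steps=8000, dt=0.01, ref=100.0):
    """Simulate one loop; unit feedback and no delay, for brevity."""
    effort, trace = 0.0, []
    for _ in range(steps):
        error = ref - effort                 # perceived volume = effort here
        if pure:
            effort += 0.125 * error * dt     # Speak7: pure integrator
        else:
            effort += (500.0 * error - effort) * dt / 4000.0  # Speak5: leaky
        trace.append(effort)
    return trace

leaky, pure = run(False), run(True)
worst = max(abs(a - b) for a, b in zip(leaky, pure))
print(round(worst, 3))   # the trajectories differ by a fraction of a percent
```

Both versions settle at the reference; the leaky form falls short of it by the tiny leak-induced offset, which is the only visible difference.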

In Speak5 through 7, the disturbance changes the feedback connection,
briefly doubling the effect of the output sound on the received sound
level. This means that the loop gain temporarily doubles. If the system is
set up with a sufficiently high gain for the fastest possible response when
there is no disturbance, it will oscillate when the disturbance occurs. So
the best setting for the gain is a value that will just avoid oscillations
for the largest disturbance that will occur (i.e., the highest value of the
multiplier of Received Sound Intensity) given the perceptual time delay.
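
The gain/delay trade-off can be illustrated with a pure-integrator loop and a transport delay. For this idealized loop the standard stability condition is that the product of loop rate k and delay d stay below pi/2; that threshold is textbook control theory, not something from the Speak models, and doubling the feedback connection (the disturbance) doubles the effective k.

```python
from collections import deque

def settle(k, d=0.25, dt=0.001, ref=100.0, seconds=30.0):
    """Max |error| over the last 5 s of a delayed pure-integrator loop."""
    delay = round(d / dt)
    ear = deque([0.0] * delay, maxlen=delay)
    effort, worst = 0.0, 0.0
    steps = round(seconds / dt)
    for n in range(steps):
        error = ref - ear[0]          # perception lags output by d seconds
        effort += k * error * dt      # pure-integrator output function
        ear.append(effort)
        if n >= steps - round(5.0 / dt):
            worst = max(worst, abs(error))
    return worst

print(settle(k=2.0))   # k*d = 0.5, well below pi/2: error settles out
print(settle(k=8.0))   # k*d = 2.0, above pi/2: oscillation grows instead
```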

I'm sure there are analytical ways of calculating the best gain (or in your
model, a reciprocal number called "time to adjust"), but in PCT we would
handle that as the action of some higher-level process, not the basic
control loop itself. In the real neural loops that are known, I don't think
there is any evidence for such computations. However the system is brought
toward the best performance, it's more like a slow process of trial and
error. "If oscillations occur, reduce the gain." And also, in PCT, we don't
assume that the real system is optimized. The point is to have parameters
to adjust to match the performance of the model to that of the real person.
While people do approximate optimum behavior in control tasks, they always
fall significantly short of it, enough so that the model is always capable
of better performance than the person. That allows room for adjusting
parameters in both directions about the condition of best fit, to be sure
the prediction error is really minimized.

Best,

Bill P.

speak7.mdl (59 Bytes)

[From Bob Eberlein (19980911.0900 EST)]

Hi Bill,

I have tried to include in the model both of the formulations we have been
talking about. There is a switching variable in the model - formulation switch - which
goes between them. To simulate the gain/slowing formulation just simulate. To
simulate my formulation click on the Set icon up top, then click on -formulation
switch- and type in 1. Set the run name to bob and simulate. You will see that
the behavior is quite similar. To see the results of forcing the volume error
to 0 click on the Set icon then click on -error test- and set its value to 1.
If you make a billtest and bobtest (with -formulation switch- also set to 1) in
this manner you can see the difference in behavior.

You will note that I have changed the units to be broken into three sets. One
is physical (watt/SqMeter), one is perceived physical (PWpSqM which stands for
Perceived Watts per Square Meter), and finally Nervous System Units. In this
case the GAIN parameter actually converts from some perception about reality to
the internal processing units. This makes more sense to me since it is hard to
understand why the volume of speech as perceived by a person should be measured
in nervous system units.

Now the question arises whether there is any strong reason to prefer one
formulation over the other. I prefer mine because of the steady state
behavior. In terms of their response to stimuli they are very similar.

Bob Eberlein

speak8.mdl (59 Bytes)

[From Bill Powers (980911.0938 MDT)]

Bob Eberlein (19980911.0900 EST)--

Hi, Bob --

In your model speak8, you've used my Speak5 formulation, in which I used a
Slowing factor and a Gain, and put them inside the INTEG function. In
Speak7, I revised the Speaking Effort box so it just integrated the Effort
Adjustment variable. Unfortunately I called that variable "Output" for some
reason known only to Beelzebub. Also, in deference to your use of a pure
INTEG function, I made the flow a separate variable equal to Gain*volume
error. Eliminating the slowing factor
required adjusting the new Gain factor to 0.125 (equal to the old
Gain/Slowing).

So my gain variable is now exactly the same in effect as your 1/( PERCEIVED
OUTPUT SENSITIVITY/time to adjust effort). In fact my gain variable is
0.125 or 1/8, while your combined factors amount to 1/10. And that is the
ONLY difference between our models, functionally.

Come to think of it, I should have said I was getting rid of the Gain
factor and keeping the Slowing factor, which is in the denominator. Then
the apparent difference between models would have been even less.

In the attached program I have changed the effect of the Formulation Switch
to operate on the Effort Adjustment variable, and altered my side of the
model to reflect Speak7's organization. If you change my Gain variable to
0.1, the two versions of the model behave identically, including the error
test.

Nice method for comparing the models, by the way. Elegant, in fact.

I have some questions about some details of your formulation, but first:

>You will note that I have changed the units to be broken into three sets.
>One is physical (watt/SqMeter), one is perceived physical (PWpSqM which
>stands for Perceived Watts per Square Meter), and finally Nervous System
>Units. In this case the GAIN parameter actually converts from some
>perception about reality to the internal processing units. This makes more
>sense to me since it is hard to understand why the volume of speech as
>perceived by a person should be measured in nervous system units.

The reason is very simple: all the brain can experience exists in NSU, not
physical units. Outside the ear, sound volume is measured in watts/meter^2.
Just inside the boundary formed by sensory receptors, it is measured in
impulses per second -- NSU. There is a conversion factor at the input with
units of NSU/(Watt/Meter^2), and a magnitude that can only be determined
experimentally. But the brain knows nothing of that conversion factor -- it
experiences only the NSU. In fact the conversion is nonlinear, but we can
use a linear approximation for a first model.

So I maintain that _all_ variables in the brain should be expressed in NSU,
with no distinction being made between "physical" NSU and the others.
They're all just trains of nerve impulses, inside the brain.

>Now the question arises whether there is any strong reason to prefer one
>formulation over the other. I prefer mine because of the steady state
>behavior. In terms of their response to stimuli they are very similar.

They're identical, as I think you will see from Speak9, attached. The only
real question is whether to use a pure integrator or a leaky one, and that
depends entirely on which form results in behavior that fits the
observations the best. In fitting models to tracking behavior, I've found
that including a leak allows a slight improvement in the fit. The worst fit
(5% RMS error of prediction or so) comes from a model with a pure
integrator and no delay or leakage. The prediction error is reduced to about
3% by including a perceptual delay; adding adjustable leakage makes the RMS
error even slightly smaller -- although, I
admit, not enough to be significant.

I have one question: you show, in your model, a "Perceived output
sensitivity." Since the output sensitivity is a parameter of the model, how
does it get perceived? I don't see why this constant is needed, anyway,
since without it you can get exactly the same result by adjusting "Time to
adjust effort."

Best,

Bill P.

speak9.mdl (59 Bytes)

[From Bob Eberlein (19980911.1400 EDT)]

Hi Bill,

Thanks for the model. To get units to match we need to make the units for GAIN
equal to NSU/PWpSqM/Second - so that having both GAIN and SLOW might not be so

I disagree with you on the units of measure issue. While it is true that what
goes on inside the brain is electrical and chemical, the way that a person
responds to stimuli has embodied in it some sort of model of the external
environment. When I am driving I look at my speedometer to see what it says. I
then compare that number to what I think is posted and what I think is
reasonable. I do this all in miles/hour or sometimes kilometers/hour. I don't do
it in feet/second (some people might) and I don't do it in firings/second or PH
Balance (I doubt that anyone does).

Thus the introduction of -Perceived Output Sensitivity-. Doing something like
this is second nature for me because it is the analogue of productivity in a
workforce. If you want to determine how many people to have working look at how
much you want to produce and divide by productivity. I made this the same as the
actual output sensitivity and left it at that. It is true that for this particular
model it only changes the adjustment speed but this type of concept can have
different impacts. In the classic SD Workforce-Inventory model (see wfinv3.mdl
which is a slight modification of wfinv2.mdl from the PLE users guide chapter 2) -
setting Perceived productivity to 2 gives a completely different type of behavior
with a constant inventory shortage. For more complex models the changes can be
even more complete.

Now there is a pretty deep question about whether improving fit is the same as
improving the quality of a model. If we define quality as the ability to predict
what will happen given a very broad range of stimuli I will assert that these
aren't the same. However, I think this is probably another debate altogether.

I will say that once a model is up and running passing it against data and making
parametric adjustments to accomplish this is something I do all the time so I
don't see anything wrong with it.

Bob Eberlein

wfinv3.mdl (59 Bytes)

[From Rick Marken (980911.2030)]

Bob Eberlein (19980911.1400 EDT)--

>I disagree with you [Bill P.] on the units of measure issue.
>While it is true that what goes on inside the brain is electrical
>and chemical, the way that a person responds to stimuli has
>embodied in it some sort of model of the external environment.

People don't respond to stimuli; they control perceptions.
Fortunately, control systems don't need a model of the external
environment in order to be able to act appropriately to keep
perceptual variables under control.

>When I am driving I look at my speedometer to see what it says.
>I then compare that number to what I think is posted and what I
>think is reasonable. I do this all in miles/hour or sometimes
>kilometers/hour...

What you are doing there is comparing the neural firing rate
that represents the speedometer reading to the neural firing
rate that represents the number on the sign. You can't see

>Thus the introduction of -Perceived Output Sensitivity-.

We don't introduce this kind of thing in PCT because we try
to keep the modeller out of the operation of the model. There
is no way for the control system itself to do what your "Perceived
Output Sensitivity" function does. A control system cannot
measure the relationship between the neural firing rate that
represents some physical variable and some measure of that same
variable. A system (like you) that is _outside_ the control
system can do this, but the control system itself can't do it and
(fortunately) doesn't need to.

>Now there is a pretty deep question about whether improving fit
>is the same as improving the quality of a model. If we define
>quality as the ability to predict what will happen given a
>very broad range of stimuli I will assert that these aren't
>the same.

I think it depends on why you are doing the modeling. If you are
building models as part of the process of trying to understand a
natural phenomenon (like human behavior) then the quality of the
model is evaluated in terms of how well it predicts appropriately
collected data. If you are building models to meet certain
performance specs (like "AI" systems), then the quality of the
model is evaluated in terms of its ability to meet the specs.

Best

Rick


--
Richard S. Marken Phone or Fax: 310 474-0313

[From Bill Powers (980912.0203 MDT)]

Bob Eberlein (19980911.1400 EDT)--

>Thanks for the model. To get units to match we need to make the units for
>GAIN equal to NSU/PWpSqM/Second - so that having both GAIN and SLOW might

Some points that you understand but not all onlookers might be familiar with:

The only point of getting the units right is to make sure we are being
consistent with our own definitions of the quantities involved. If we have
advertised that for every tenth roll of film developed we are giving away
two 8 x 10 prints, then the number of 8 x 10 prints we must provide is 2/10
or 0.2 prints/roll. The constant relating number of prints to number of
rolls must have units of prints per roll to make the computation come out
right. If you divide 5 rolls by 0.2 prints per roll, you get a shocking
number of prints you have to give away, 25. But the units (when we treat
them as algebraic symbols) come out rolls/(prints/roll) or rolls squared
per print, which makes no sense at all -- we're trying to compute the
number of prints, so the units should come out as prints, not rolls squared
per print. The only way to do that is to _multiply_ the number of rolls, 5,
by the constant 0.2 prints/roll, to get 1 print, which is more like it. In
the "algebraic" expression rolls times (prints / rolls), the rolls cancel
out, leaving only prints, the units we expected to get. The units tell us
that we're doing the calculation wrong; dividing instead of multiplying.

So keeping track of units is a powerful way to be sure that your
computations are being done in a self-consistent way. But you can make the
units of constants anything you like, even NSU/PWpSqM/Second, if that's
what it takes to be consistent with your definitions.
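
The film-prints bookkeeping can be mimicked with a toy Quantity class. This is purely illustrative, not a real units library: it just carries a dict of unit exponents along with each value so the roll^2/print nonsense becomes visible.

```python
class Quantity:
    """Toy value-with-units; units maps unit name -> exponent."""
    def __init__(self, value, units):
        self.value, self.units = value, dict(units)
    def _combine(self, other, sign):
        u = dict(self.units)
        for name, exp in other.units.items():
            u[name] = u.get(name, 0) + sign * exp
        return {name: exp for name, exp in u.items() if exp}
    def __mul__(self, other):
        return Quantity(self.value * other.value, self._combine(other, +1))
    def __truediv__(self, other):
        return Quantity(self.value / other.value, self._combine(other, -1))
    def __repr__(self):
        return f"{self.value} {self.units}"

rolls = Quantity(5, {"roll": 1})
rate = Quantity(0.2, {"print": 1, "roll": -1})   # 0.2 prints per roll

wrong = rolls / rate   # value 25.0, units roll^2/print -- nonsense
right = rolls * rate   # value 1.0, units print -- what we wanted
print(wrong, right)
```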

>I disagree with you on the units of measure issue. While it is true that
>what goes on inside the brain is electrical and chemical, the way that a
>person responds to stimuli has embodied in it some sort of model of the
>external environment.

I'm not going to rise to the bait of "responds to stimuli" again until
you've read up on my theory of behavior. Nor is there much point in
discussing "some kind of model embodied in it," unless you're prepared to
say WHAT kind. If, after that, you still want to talk about responding to
stimuli, we can have a Calvin and Hobbes fight, but let's not do that now
when you're familiar with only one side of the argument.

A more neutral way to deal with this issue is by thinking about the
principles of analog computing. When you set up an analog computer to
simulate the response of a mass to an applied force, you patch together two
integrators. The voltage applied to the input of the first integrator is
declared to represent the applied force, at (let us say) 1 volt per newton
of force. Dividing this voltage by a constant before it's applied makes the
input into an acceleration, if the constant is related to mass by the
proper calibration factor. The voltage at the output of that integrator is
declared to represent velocity, at 1 volt per (meter per second). This
voltage becomes the input to the next integrator, whose output is declared
to represent position, at 1 volt per meter. The internal parameters of the
integrators are adjusted so that 1 volt of input to the first integrator,
applied for 1 millisecond, will cause the output voltage of that integrator
to rise to 1 volt. This establishes the time scale at 1 millisecond of
simulated time = 1 second of real time. The second integrator is similarly
calibrated.

Clearly there are some unit conversions going on here, and we have to
understand them to translate the behavior of the voltages in the analog
computer back
into corresponding physical quantities in a physical system. But _the
analog computer does not have to do this translation_. All it has to do is
operate on input voltages to produce output voltages. It's only the _user_
of the analog computer who has to assign symbols to the voltages in order
to understand them.

The same applies to the brain. An analog computer deals only with voltages
and currents, not forces, velocities, and accelerations. Similarly, the
brain deals only with trains of neural impulses which are passed through
various computing functions to create output signals out of operations on
input signals. It can't deal directly with the physical environment.

There is, of course, one big difference: there is no "user" of the brain
except other parts of the same brain. The brain acts not only like an
analog computer, but as a symbol-handling device which, at a higher level
of function, can assign symbols to name signals just as the user of an
analog computer can assign units to name voltages. In fact, without this
sort of naming of signals, the brain could function like an analog computer
but there would be no cognitive understanding of what it was experiencing
and doing. All neural signals are alike, just as all voltages are alike.

There is another big difference. The user of an electronic analog computer
can look at the voltages inside the computer and at the physical variables
outside it that the voltages are supposed to represent, and adjust the
calibration of the system accordingly. So it's possible to assign scaling
factors to convert from the "real" values to the "modeled" values.

But the cognitive "user" of the analog systems of a brain can see only the
signals. They can be calibrated against each other, but there is no way to
calibrate them against the external physical reality. The best that can be
done is to set up a simulation that _might_ represent the behavior of the
external reality (one name for this simulation is "physics"), and then
calibrate the other neural signals against this simulation. And that's
basically what we have to do, as cognitive entities living inside a brain.

>When I am driving I look at my speedometer to see what it says. I then
>compare that number to what I think is posted and what I think is
>reasonable. I do this all in miles/hour or sometimes kilometers/hour. I
>don't do it in feet/second (some people might) and I don't do it in
>firings/second or PH Balance (I doubt that anyone does).

I agree. We name the experiences provided to us by our senses in the form
of neural signals. That's how we give these anonymous trains of neural
impulses meanings. Only as theorists do we have any interest in "neural
signals." To the occupant of a brain, neural signals ARE reality.

But unless you want to try to include this naming process in your model of
the brain (some day, perhaps, but it's a little early for that), I think
it's better to take the point of view of the user of the analog computer:
what's inside the computer is dealt with in terms of computer variables,
and what's outside it in terms of physical variables, and we just have to
keep track of the (proposed) correspondences.

Of course we can do this by giving the signals the same names as the
physical quantities; if there's a signal representing light intensity, we
can call it the light intensity signal. But its units are not photons per
second or foot-lamberts; they are, literally, impulses per second.

Yes, there is some sort of model of the world in the brain. We call it
perception. And it is composed, physically, of neural impulses -- the
"voltages" of the brain. To us, it is the only world we can experience.

>Thus the introduction of -Perceived Output Sensitivity-. Doing something
>like this is second nature for me because it is the analogue of
>productivity in a workforce.

But that's not my objection. My objection is that if you want to have a
perception of a parameter of some part of the model, you have to explain
how it is perceived. As far as I know, there are no sensory receptors that
can detect the sensitivity of a muscle to excitatory signals, in kilograms
of force per (impulse per second).

If you want to model some objective process outside the brain, this is no
problem. You can propose that someone has a way of measuring both man-hours
and production, and from this derive the value of a "productivity" constant
(or variable). That calculation can be put into the model. Or you can
propose that productivity is an independent variable to be set from outside
the simulation.

However, the particular model we're talking about, Speakx.mdl, is not
outside the brain: it exists largely inside a brain. So we have to stick to
the rules for modeling brains: all the variables inside a brain are in NSU.
While they may depend, via sensory calibrations, on external variables
measured in physical units, they are not in 1-to-1 correspondence with the
physical variables. They are literally in different units.

Also they are in different places. In particular, the "desired value" of a
perceived variable is not in the environment; it is synthesized by the
brain. And the "error" or "gap" exists ONLY in the brain. Perception only
describes the apparent state of the environment, not the desired or
intended or required or necessary state. And certainly not the ACTUAL state
of the environment, as physics describes it.

>Now there is a pretty deep question about whether improving fit is the
>same as improving the quality of a model. If we define quality as the
>ability to predict what will happen given a very broad range of stimuli I
>will assert that these aren't the same. However, I think this is probably
>another debate altogether.

Yes it is, and I think it is possible to achieve both accuracy of
prediction and broadness of applicability. But only using control theory,
not stimulus-response theory.

>I will say that once a model is up and running passing it against data and
>making parametric adjustments to accomplish this is something I do all the
>time so I don't see anything wrong with it.

Well, good! There is too much modeling going around that is based on
assumptions that are never tested against data. As you are fully aware,
what any model does by way of visible behavior is acutely sensitive to the
parameter settings, not to mention picking the right parameters. The
parameters determine whether you have a control system or an oscillator.

Best,

Bill P.