[From Bill Powers (961127.0530 MST)]
Chris Cherpas (961126.1613 PST)--
Goal 2000:
Shouldn't the time-tag for CSGNet posts use a 4-digit year (e.g., 19961126)?
There won't be any ambiguity until, say, 500101 when the first computers
were built, or maybe even 60 or 70. But go ahead if you want -- it's just a
convention that some use and some don't.
Theories:
Constructs such as functions and mechanisms presumably involve different
organizations for controlling those two kinds of perceptions by scientists.
It's very confusing to me when some people use the term "function" without
meaning "mathematical function." To me, a mechanistic explanation is one
that involves analyzing a system into mathematical functions in which one
variable depends on one or more other variables, and then solving the
resulting system of equations for the variables of interest. This is why we
refer, in PCT, to an input function, a comparator or comparison function, an
output function, an environmental feedback function, and a disturbance
function. We mean, literally, _mathematical functions_, mathematical
expressions that state how the value of a function depends on the values of
the arguments. These functions are descriptions of the way one physical
variable depends on other physical variables; they are each presumably
descriptions of the operation of some kind of neuromechanical device or
subsystem, the operation of which is approximated by the mathematical function.
To me, a functional description means first observing how some variables in
the real system depend on other variables, and then writing equations that
express the dependencies mathematically, as nearly as possible. Doing this
reveals whether you have a complete description of the system. Every
variable must be either an independent variable which can be set freely to
arbitrary values, or a dependent variable that is a function of one or more
other variables in the system (including independent variables). There is no
way to solve the system of equations for any one variable until the
description is complete. For every variable, there must be an expression
which states all the variables on which its value depends, and every such
expression must be an independent equation. There must be as many
independent equations as there are system variables, with the values of
independent variables being the only exceptions (since they are functions of
unknown variables outside the system).
These aren't arbitrary rules; they're simply what is necessary to obtain a
complete analysis of a system. These rules are followed in every hard
science and in every field of engineering. They are simply the rules of
mathematical reasoning about physical systems.
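To make the bookkeeping concrete, here is a minimal sketch (my own construction; the gain symbols k and g are assumed for illustration) of the basic loop written as four independent equations in four system variables, with the reference r and disturbance d as the independent variables:

```python
# The basic control loop as a system of equations: four system
# variables (qi, p, e, qo), four independent equations, with r and d
# set from outside. sympy confirms the system is exactly determined
# and yields the closed-loop solution.
import sympy as sp

qi, p, e, qo = sp.symbols('qi p e qo')   # system variables
r, d, k, g = sp.symbols('r d k g')       # independents and assumed gains

eqs = [
    sp.Eq(p, qi),            # input function: perception of input quantity
    sp.Eq(e, r - p),         # comparator: error = reference - perception
    sp.Eq(qo, k * e),        # output function: output proportional to error
    sp.Eq(qi, g * qo + d),   # environmental feedback function plus disturbance
]

sol = sp.solve(eqs, [qi, p, e, qo], dict=True)[0]
# Closed-loop input quantity: qi = (g*k*r + d) / (1 + g*k)
print(sp.simplify(sol[qi]))
```

With one equation removed, the system is underdetermined and no variable can be solved for, which is the point about needing as many independent equations as system variables.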
Models/simulations improve our control for perceptions of systems (qua
systems) in ways that unconnected functional relations cannot. Having a
large supply of functional relations available which sometimes appear
to contradict, at others to be independent, and still others to connect
would seem to provide the errors for eventually stabilizing the
higher-order perceptions.
My tolerance for loose verbalisms, even my own, is at a low ebb after the
long ongoing go-around with Bruce Abbott. Errors are not needed to stabilize
perceptions. Stability is a technical term indicating whether the variables
in a system reach a steady-state condition when undisturbed, or diverge
indefinitely. I do know what you mean, though.
As I said above, analyzing a physical system requires first reducing it to
unconnected functional relations, literally unconnected in that each
relation must be independent of all the others. To reconnect the system, you
note which variables are common to different functional expressions, the
output of one function being one of the inputs to other functions. That step
makes the _structure_ of the system obvious. If some functional relations
contradict others, this reveals a mistake in the analysis or contradictory
premises; the contradiction has to be removed before you can go on.
The PCT analysis of scientists' learning and controlling for the perception
of functions, I would think, is part of doing the analysis for mechanisms,
although one need not assume that the path always goes from the former
to the latter.
What I have been doing above is showing how a scientist learns and controls
for an understanding of systems. This method has been developed and handed
down for many generations. The lack of any alternative that works at all
suggests that this is a discovery about fundamental workings of the human
mind and of nature. Just what aspects of brain organization it reveals is
uncertain; we don't even understand how it is that we can do mathematics.
By the way: why isn't something like McClelland's
collective control notions applied to within-organism relationships
between control systems?
Yes, the theory of conflict within the organism was the first application of
these ideas (chapter 17, B:CP).
Skinner's words versus math models:
When we don't have the models, we fall back on words. So?
So disaster if it's done subjectively and carelessly. In a mathematical
analysis, one of the primary requirements is that a symbol have one and only
one meaning, and that this meaning remain exactly the same throughout the
mathematical development. You could say that this is the heart of
mathematical thinking. When you try to reason in words, you become subject
to word associations which shift the meanings of symbols without any
announcement of the change. You can use the word "function" to mean a
relation among variables, and ten minutes later be using "function" to mean
"the use to which something can be put." And you can think you're still
talking about the same thing.
The test is to see whether you can substitute the original definition of a
term into a later sentence containing it without changing the meaning of the
sentence.
Verbal reasoning is useful only for the shortest and most elementary chains
of reasoning. When relationships become the least bit complicated, and
especially when they involve more than simple short sequential chains of
events, verbal reasoning fails utterly. That's why it's so hard to convey
how even the basic control system unit, with its loop of simultaneous causes
and effects, works. And this is why it's so easy for people to latch onto
the words used to describe a control system and start free-associating on
them, using them in inappropriate ways and ways that actually contradict
their intended meanings. This is how we get off into pointless arguments
about what "control" really is. In PCT, control is used to designate the
behavior of a specific kind of organization, the behavior implied by the
basic PCT diagram. That is all it means and it always means exactly the same
thing. If someone starts talking about other kinds of organizations and
saying that they, too, control, then they aren't talking about the same
phenomenon any more, even though they're using the same symbol.
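As a concrete illustration of that loop of simultaneous causes and effects (my sketch; the gain and slowing constants are assumed), stepping the loop in small time increments shows each variable acting as both cause and effect while the perception settles near the reference despite the disturbance:

```python
# Minimal simulation of the basic control-system unit (illustrative
# constants, not from the post). Each pass around the loop updates
# every variable from the others' current values.
k = 50.0        # output gain (assumed)
slowing = 0.01  # integration factor that keeps the loop stable (assumed)
r = 10.0        # reference signal
d = -3.0        # constant disturbance
qo = 0.0        # output quantity

for _ in range(500):
    qi = qo + d                      # environment: input quantity
    p = qi                           # input function: perception
    e = r - p                        # comparator: error signal
    qo += slowing * (k * e - qo)     # leaky-integrator output function

print(round(p, 2))                   # -> 9.75, close to r = 10.0
```

The steady-state error here is (r - d)/(1 + k); raising the gain k drives the perception closer to the reference.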
It's possible to use words in a quasi-mathematical way, with better results.
This comes about by attaching verbal definitions to relations that are
basically defined mathematically, and then always using words according to
exactly those definitions, without branching off into connotations and
associations -- puns. But to do this requires a kind of self-discipline that
is seldom taught and even more seldom learned.
The process
of building a model involves more words than we want to admit, but there
they are.
And caveat emptor.
Mechanisms make good representations we can control and we
thereby gain better control of ever-growing numbers of less abstract
variables, but we seem to need a lot of words to get from here to there
whenever there's the slightest doubt. But let's try enforcing a rule on
CSGNet that nobody can post words anymore, just math models. Sound
feasible?
It's not words that are the problem so much as the way we use words. Do we
try to look past the words at the meanings they indicate, or do we just
groove on their sounds and associations? This is what separates the serious
thinkers from the dilettantes.
Schedules of rft:
Bill: interresponse-time distributions have been around since rat droppings
and may be of more interest (but, Darth, you tell us!);
So let's use them instead of eyeballing "cumulative records," which couldn't
be better designed if Rorschach had invented them.
[personal, no doubt tedious, story:]
I put pigeons on concurrent linear-VI schedules where the combined
rate of food per session would not depend on how they distributed their
time between the concurrent alternatives, except in the most extreme cases,
which you never empirically observe.
The problem with VI schedules (or V- schedules in general) is that they
deliberately introduce noise that is larger than the signal. Feedback
effects convey this noise to every variable in the system. And the noise
creates the appearance that behavior has changed, while simultaneously
making it very difficult to determine the system characteristics.
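A small simulation makes the point (my construction; the 60-second mean interval, session length, and fixed response spacing are all assumptions): on a VI schedule the obtained reinforcement rate is nearly flat over a 25-fold range of response rates, while its session-to-session scatter is large compared with those differences.

```python
# Simulated VI sessions: the schedule timer sets up a reinforcer at
# exponentially distributed intervals, and the next response after
# setup collects it.
import random

def vi_session(resp_rate, mean_interval=60.0, duration=3600.0, rng=None):
    """Return reinforcers obtained in one simulated VI session."""
    rng = rng or random.Random()
    t = 0.0
    setup = rng.expovariate(1.0 / mean_interval)
    collected = 0
    step = 1.0 / resp_rate            # responses at fixed spacing
    while t < duration:
        t += step                     # next response
        if t >= setup:                # a reinforcer was waiting: collect it
            collected += 1
            setup = t + rng.expovariate(1.0 / mean_interval)
    return collected

rng = random.Random(1)
results = {rate: [vi_session(rate, rng=rng) for _ in range(5)]
           for rate in (0.2, 1.0, 5.0)}   # responses per second
for rate, counts in results.items():
    print(rate, counts)               # similar counts at all three rates
```

The mean obtained rate barely moves across the three response rates, but individual sessions scatter widely around it: the noise the schedule injects is larger than the signal one is trying to measure.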
Changing relative rates of reinforcement was accompanied by systematic
changes in the distribution of time across alternatives. The rates of
changeover responses (i.e., switching from one side to the other) changed
systematically and allowed me to get around much of the topping-out effect
of response rates on single schedules. People like the evil Herrnstein
and the benign Vaughan did lots of experiments before me showing that
the rate of changing-over can vary sensitively in relation to relative
reinforcement rates (some folks even looked at changes in the distributions
of inter-changeover times).
That didn't solve lots of bigger problems, but at least we can (as
I did) record (including inter-changeover response distributions)
and plot some within-session ("dynamic") changes when the schedule
pair changes; systematicity in apparently dynamic changes is what
the EABer controls for and deeply loves.
I'm all for systematicity. However, in a situation deliberately made
complex, there is always the risk that it is induced by the apparatus or
some trivial physical effect in a way that isn't obvious. Consider Bruce
Abbott's treatment of the systematic effect of ratio on rate of bar
pressing. A mathematical analysis showed that when the right constant
collection time is assumed, the effect of ratio on rate of pressing
disappears entirely. Later, in analyzing Staddon's cyclic-ratio data in a
properly-instrumented experiment of his own, he was able to measure both
running response rates and collection times, and to show unequivocally that
response rate is essentially constant over all ratios.
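The arithmetic behind that analysis can be sketched as follows (my illustration; the running rate and collection time are assumed constants, not Abbott's measured values): if the running response rate b is constant and each reinforcer costs a fixed collection time c, the measured overall rate n/(n/b + c) climbs with the ratio n even though b never changes.

```python
# Overall response rate on FR-n when running rate b is constant and
# each reinforcer takes a fixed collection time c (assumed constants).
b = 2.0   # running rate, responses per second (assumed)
c = 8.0   # collection time per reinforcer, seconds (assumed)

for n in (5, 10, 20, 40, 80):          # fixed-ratio requirement
    overall = n / (n / b + c)          # responses/s including collection
    print(n, round(overall, 2))
```

With these constants the overall rate rises from about 0.48/s at FR-5 to about 1.67/s at FR-80, a 3.5-fold "effect" of ratio on rate produced entirely by the constant collection time.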
With respect to the matching law, it's possible to show that on fixed-ratio
schedules this law amounts exactly to a statement that the ratios on the two
alternatives are identical. When the schedules are not identical, the
matching law is simply false, impossible. The matching law is equivalent to
the statement that 2 = 3 when the ratios are not the same.
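The algebra is short enough to check mechanically (my sketch): write the FR feedback functions R1 = B1/n1 and R2 = B2/n2 and divide one side of the matching law by the other.

```python
# On concurrent fixed-ratio schedules FR-n1 and FR-n2, the schedules
# force R1 = B1/n1 and R2 = B2/n2. The matching law B1/B2 = R1/R2
# then reduces to n1 = n2.
import sympy as sp

B1, B2, n1, n2 = sp.symbols('B1 B2 n1 n2', positive=True)
R1, R2 = B1 / n1, B2 / n2             # FR schedule feedback functions

# Divide the left side of the matching law by the right side:
ratio = sp.simplify((B1 / B2) / (R1 / R2))
print(ratio)                          # n1/n2: matching holds only if n1 == n2
```

The behavior rates cancel completely, so when n1 differs from n2 the "law" asserts that two unequal numbers are equal.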
If you now go to a variable-ratio concurrent schedule, exactly the same
consideration holds, but now there is a random variable added to everything
which fuzzes out the relationships. Nevertheless, the matching law is still
true only if the schedules are identical.
What if we now go to interval and variable-interval schedules? I don't know
the answer to that; it gets harder to express the matching law, and the
introduction of variabilities with different distributions makes the task
even harder. All this means is that if the matching law for interval
schedules is as trivial as it is for ratio schedules, we will be longer in
discovering this, and will waste more time musing about its implications. I
should think that a wise researcher would invest considerable time in
working out the mathematical expression of the matching law for fixed and
variable interval schedules, to see if it's worth drawing any conclusions
from it.
Schedules are not the panacea that many in EAB hoped for, and some
still hope for, but the systematicity is not necessarily an illusion.
Systematicity itself, if properly observed, is never an illusion; the only
illusion that can be involved is in the impression that it is a significant
systematicity when it may simply be a tautology, equivalent to finding that
2 = 2.
Studying concurrency may lead to studying hierarchically organized
concurrency, and eventually, PCT?...nah!
Yes, I think that is true. In concurrent schedules, there are two output
variables: the amount of behavior (rate) and the _location_ of behavior. The
system controlling intake can now vary two output variables (by using two
lower control systems) instead of only one as a way of maintaining the
desired intake. Actually the location variable is present even in
single-schedule experiments, but only one of the locations is instrumented
to record the organism's actions, and only one is associated with a source
of the controlled variable (an official one, anyway). However, the
higher-order system involved still varies the location variable when
behavior at an achievable rate fails to bring in enough food or whatever is
being used as a "reinforcer." Since this necessarily affects the responses
being made in the instrumented location, the result is an apparent change in
the rate of behavior. But it is not the rate of behavior that is changing so
much as its location.
Fretless base
That's better than a baseless fret.
Best,
Bill P.