Welcome Phil Runkel!; Op Cond programs; mind and brain; video cards

[From Bill Powers (941017.1030 MDT)]

The good news is that Phil Runkel is on CSG-L. There isn't any bad news.
Welcome aboard, Brother Phil.


-----------------------------------------------------------------------
Bruce Abbott (941016.1530 EST)--

I've actually WRITTEN programs that execute VI (and other) schedules
of reinforcement and collect the data, so if you like we can use one of
those as the basis for the environmental side of the simulation.

Wonderful. I'd like to see the programs. I can read the source code if
you email it to me (and Rick) or put it on the net.

Actual schedules usually provide reinforcement on a "constant
probability" basis: in theory the probability of a reinforcer becoming
"set up" in a given interval is constant regardless of the number of
intervals since last reinforcer delivery.

...

One way to create a constant-probability schedule is to sample a random
number (say, between 0 and 1) at equal time-intervals and set up the
reinforcer if the number is smaller than some criterion value, K. For
example, if the computation were being performed once each second,
setting K = 1/60 = .017 will provide one reinforcement each minute, on
average.

I can see that this is different from my assumed "apparatus function."
Is this how the actual VI experiments are set up? The mechanical analogy
would be a roulette wheel with 1/K slots. The wheel would be spun on
every time-tick, and if it came up in slot 0 the key would be enabled.
Is that the picture? It would be good to settle on the "apparatus" part
of the simulation before we get too far into uses of it.

This suggests that on every iteration we would calculate something like
this:

{ once per tick: set up the reinforcer with probability K }
if not key_enabled then
   key_enabled := random(10000) < K*10000;

and then later,

{ a peck collects the reinforcer only if one has been set up }
response := calculate_response;
reinforcement := key_enabled and response;
if reinforcement then key_enabled := false;

Does that look right? We could set a limit on the time delay, too, by
keeping track of time elapsed after a previous reinforcement.

Although the schedule does not guarantee a given rate of reinforcement,
it does specify the probability that a reinforcer will become available
after a given interval of time. Such opportunities can be lost (the
pigeon may fail to "collect" the reinforcer after it becomes available)
but this in no way negates the fact that they were there.

This suggests that if the pigeon fails to trigger the reinforcer after
some length of time, the opportunity is lost. I had assumed that once
the key was enabled, the apparatus would simply wait for the next peck,
no matter how long it took. ???
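
If the opportunity can indeed expire, I suppose the tick loop in the
sketch above would need something like a "limited hold" counter. Again,
this is only a guess at the arrangement, with hold_time and hold_limit
invented for illustration; it would go somewhere inside the per-tick
loop:

{ expire an uncollected setup after hold_limit ticks }
if key_enabled then
begin
  hold_time := hold_time + 1;
  if hold_time > hold_limit then
    key_enabled := false;
end
else
  hold_time := 0;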

Empirically, both relative response rates and relative time spent
pecking on a key (time spent being measured as time between switches)
yield matching, although rates tend to fit with less noise. I may be
missing something, but I think this is inconsistent with your analysis.

I made a mistake here, which Rick Marken uncovered by actually doing a
simulation. If the pigeon is pecking at a constant rate, and simply
spends equal time on both keys, then the number of pecks is necessarily
equal on both keys; I didn't see that (dummy!). So matching would NOT
occur under the conditions where I said it would. I violated my own rule
of not making claims without backing them up with a simulation (or an
experiment, of course). What Rick has shown is that matching _can't_
occur if pecking occurs at a constant rate, no matter how the choices
are distributed. I think this will hold up even with the new way of
determining intervals, but I'll wait for the simulation results this
time. So we have to study the effects of variations in pecking rates on
the two keys.
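
(To put made-up numbers on that: a bird pecking steadily at one peck per
second that spends 45 of every 60 seconds on key A makes 45 pecks on A
and 15 on B, so the peck ratio is pinned to the time ratio, 3 to 1, no
matter what the schedules deliver.)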

Perhaps it has something to do with your assumption of a rectangular
distribution of schedule intervals vs the constant probability
distributions actually used. I'll have to give that some more thought
when I get some time to do so.

No need. As long as you're sure that your method of generating intervals
is equivalent to what is actually used during experiments, we can just
use your method.

I have a simulation of a simple one-choice operant conditioning
experiment that reproduces data of Staddon's -- we can get into that
later. I've only looked at FR schedules, but it would be interesting to
look at all the others, too.

My own schedule is getting hectic (which is frustrating for me as I am
definitely having more FUN doing this, and learning something to
boot!), so there will be a delay of at least several days before I can
respond with some programming.

Good. That will give me more time, too. Rick (jump-to-conclusions)
Marken, of course, will probably have the new VI scheduling routine
working before either of us is ready.
----------------------------------------------------------------------
Bruce Buchanan (941017.0100 EDT)--

All I can say is that these are not Worlds 1, 2 & 3 as Popper defined
them or as I attempted to describe them. They have nothing to do with
Popper and cannot provide any basis for commentary on his views.

They have nothing to do with what Popper _said_, but I must claim, by my
own theory, that they have to do with _how Popper was able to say what
he said_.

They simply assimilate Worlds 1 and 2 to the terms of World 3, within
which it is assumed everything can be modelled.

Not quite, I think. What I say is in principle modelable, and thus
testable against experiment. What Popper said is not testable against
experiment.

Popper made statements about worlds 1, 2, and 3. These were statements
in propositional form which, given a model of the symbol-handling
systems in Popper's brain, (or any model you like, whether it involves a
brain or not) and given the propositions which form the basis for the
statements, could be generated by that model. We could test a model of
Popper's thought processes by giving it the same propositions with which
Popper started, and showing that the model would generate similar
statements about worlds 1, 2, and 3. The model would be falsified if the
test showed it did not generate the same statements that Popper did.

Given only the statements about the three worlds, however, what is there
to model or falsify? Either you accept those three statements as givens,
or you don't. I offer a brain model to explain those statements: Popper
was using his 9th level of control systems (among others). If it turns
out that there is no 9th level, or that I have incorrectly defined the
level, or that some other process entirely is involved, then my brain
model is falsified. What are the conditions under which Popper's
propositions could be falsified?

... not everything in the universe and in man _can_ be modelled
conceptually.

What we cannot conceive cannot be modeled conceptually, I agree. But by
definition we cannot conceive what we cannot conceive, so there is no
way we can even know that a model is needed.

Of course, we can only talk about what can be modelled. About Reality
we cannot speak.

Just so.

Consensual validation can occur only if each person's impression of
agreement from other people is objectively ("World 1") correct. . . .

That is not the way I understand "consensual validation", which means
more modestly the best version of facts we can agree upon to date.

But how do we determine that we are in agreement? Just think about it
seriously for a moment. Suppose I tell you that I see an egg in front of
me, and you agree that there is an egg in front of me. How do we find
out what we are agreeing about? Can you think of any procedure, any
conversation, any method at all that will allow me to determine the
subject-matter of our agreement in any objective way? I ask this in an
attempt to point out that "consensual" means no more than "sensory," and
that "validation" is a slippery as well as subjective term. If you can
disabuse me of that idea, please do.

We cannot carry on a discussion if we do not accept or agree, at least
provisionally as a basis for a particular discussion, on the
meanings/referents for the words we use.

Well, people carry on discussions all the time without meeting that
criterion, but that's a side-issue. The real question is what meanings
or referents we can assign to terms -- meanings or referents other than
our own perceptions.

I agree that "The thinking and the knowledge are things we can model as
brain processes." Presumably these are what is Known. Does this not
imply that there may be things about the Knower, distinct from the
Known, that may not themselves be completely modelled as brain
processes?

Yes, it does imply that. My stance, however, is that before we can say
what phenomena tell us about the Knower, we must first go as far as we
can in modeling the Known, so we don't confuse brain functions with
properties of the Observer. Such confusions abound, for example Edwin
Locke's discussions of logical thinking as a property of consciousness.
The point of the PCT model, as I said in B:CP, was to account for what
we could understand in terms of a brain model, thus making clearer what
we could not account for.

My statement was to the effect that equating everything to do with the
mind with brain functions seems to assume that all the concepts of one
language (or conceptual frame of reference) can be substituted for
another.

But again I ask, if concepts and conceptual frames of reference, and
language itself, are not manifestations of brain function, then what are
they, what kinds of processes bring them about? How do we explain them?
It seems to me that we are talking about two different things. You are
telling me what we do with language and concepts, and I am talking about
how we do them. I am concerned about how a concept exists, where it
exists, how it can have any effects. It doesn't matter to me what the
concept is; the problem is the same whether you try to explain a
proposition or the negation of the same proposition. The problem is the
same no matter what linguistic conventions you use, no matter what the
frame of reference or the lexicon. It is still the question of how such
things can exist, where they exist, what their nature is, how they can
relate to the world of the senses.

Thus, a major premise is that everything can be described in terms of
neural signals, i.e. potentially in a single physicalist language, i.e.
in terms of PCT. So, there still remains, for me, some ambiguity as
to whether PCT is a theory of living control systems or a theory of
everything, and I do not see the latter as justified.

The major premise is there by default, since nobody has offered any
alternative that can deal with the same phenomena, the phenomena of
control. The question, of course, is how much of behavior and experience
can be explained by seeing them as control phenomena -- and not just by
seeing them that way, but by demonstrating that they fit the
experimental requirements for determining whether control is happening.

PCT is certainly not a theory of everything; it has nothing to say about
quarks, the stock market, or celestial mechanics. Its relation to such
things is not that of a theory to other theories it displaces; rather it
is that of a theory that explains how it is that we can have theories,
and not only have them, but act upon them. PCT is not about the
happenstance content of the mind, which depends on heredity and
experience, on education and training. Some people agree with Popper's
criterion of falsifiability; others do not. PCT is not concerned about
such disagreements; it is concerned only with explaining how either
view, or any view, can exist and have effects. In fact, you can say that
PCT is a consequence of being concerned about such things. If you are
concerned not about what one philosopher or another says, but about how
a human being can do philosophy, then you are on the same trail that has
led to PCT.

What is the definition of the term reductionism? In the sense in which
I was using it I meant to imply that all higher functions might be
fully modelled in terms of neural signals. Whether or not there are
emergent properties is perhaps a more subtle question.

A point of definition: a neural signal is not a function. It is simply a
physical variable that at any given time has a certain magnitude or
frequency. "Function" implies processes that receive sets of neural
signals and generate new signals at the output that are specific
functions of the signals at the inputs. The form of the function is
determined by the physico-chemical interactions within and among
neurons, and the patterns of connectivity that exist. The neural signals
relate to other neural signals through such functions, but the functions
themselves are not neurally represented. If one neural signal represents
the integral of the product of two other neural signals, it indicates
the value of that integrated product, but does not represent the
function either of integration or of multiplication. In experience, we
can see that some perceptions are dependent on others, but we can't see
_why_ they are; the _why_ is hidden in the computational processes of
the neurons and their connections, and is not represented as a signal.
We must construct models to try to explain why perceptions are related
as they are.
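
A toy computation, with everything in it invented purely for
illustration, may make the point concrete:

program SignalToy;
{ z is built up as the running integral of the product of two other
  "signals" x and y. All that is passed downstream is the momentary value
  of z; the multiplying and integrating reside in the process that
  computes z, not in the signal itself. }
var
  x, y, z, dt : real;
  t : integer;
begin
  z := 0.0;
  dt := 0.1;
  for t := 1 to 100 do
  begin
    x := sin(t * dt);        { two arbitrary input signals }
    y := cos(t * dt);
    z := z + x * y * dt;     { z indicates the integrated product }
  end;
  writeln(z:8:4);            { nothing in this value says how it was made }
end.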

... your perspective may be in major part a consequence of your
premises, i.e. that everything is neural function.

On the contrary, the premises are a consequence of attempting to explain
things that I can't otherwise explain. I would be happy to learn of a
viable alternative explanation of how we have thoughts, how we have
goals, how these things get translated into actions and consequences in
the world of experience. But simply listing specific thoughts does not
help explain thought. To say "universe of discourse" is to allude to
something that needs explaining, not to explain it.

I would agree that the functions of mind can be understood in terms of
hierarchical control systems. However, at higher levels I think that
the models involved are so complex and multidimensional that they leave
ordinary notions of mechanism far behind.

I agree. Most people who object to models of the brain as being
"mechanistic" have only very limited and often old-fashioned ideas about
what a mechanism can be. You have to play with signal-handling systems
and computers a lot before you begin to see that "mechanisms" have
almost unlimited capabilities; the word really just means
"organization." The boundary between "mechanism" and "organism" is
illusory, just as is the boundary between "mental" and "physical."

I don't think ideas can exist independently of the brain. However, I
do think that some of the more complex ideas of which we are capable
cannot really be mapped in terms of 3 or 4 dimensional models of our
so-called material reality, or of the brain we ordinarily conceive in
physicalist terms.

I think so, too. But as mentioned, we have to be very careful before we
can conclude that any specific phenomenon can't be handled in
physicalistic terms. We had better do all the handling we can in such
terms before we start making pronouncements about which phenomena have
to be understood in a different way.

I do not see that one cannot hypothesize that there exist independently
of the brain Realities which we require a brain to apprehend, and to
which we somehow relate, even when we cannot understand them except in
terms of neural signals.

But that is exactly what I do hypothesize. I have concluded, however,
that hypothesizing is a brain function, so that -- as far as I can see,
which is limited -- our understanding of an external world will always
be hypothetical and theory-dependent, and will never be directly
verifiable. We tell a hell of a good story about reality, but there will
always be better stories.

To me the models of the physical universe which we use to describe the
worlds of physics and neurology/brain etc. are just that - models in
the forms of neural signals, and by the same token not the only forms
of models and not necessarily a privileged level of reaggregation of
the data representing Reality.

Right. Show me a model that works better and I'll defect to it
immediately.

All I am saying, and it seems to me an unavoidable premise, is that our
validated theories somehow reflect a Real world which is otherwise
unknowable to us. While we think and talk in terms of models, our
actions have consequences for our perceptions through a real world in
which we exist - consequences of a kind which can be mapped by logic
but which also exist independently of the brain, as shown by their
effects.

No problem there. I especially endorse your observation that our actions
and their consequences, which we experience directly, are more
fundamental than any model. Models try to explain the consequences of
action by invoking invisible and imaginary worlds beyond the boundaries
of human perception, but they are always judged by how well they
explain, and predict, what we actually experience.

What I thought I had stated repeatedly and in many different ways is
that we do not perceive anything directly, that all of our knowledge is
inferential and symbolic.

But how do we perceive inferences and symbols? Forgive me for
continually scratching the same itch, but it keeps itching. If
inferences and symbols do not exist as neural signals in a brain, then
how do we become aware of them? Where do they exist? What is their
nature? What is an implication, that we can know of it?

Well, that's enough. There may be an unbridgeable gulf here, though I
would obviously hope not.

Me, too.
-----------------------------------------------------------------------
NOTE TO CHUCK TUCKER: Have you had any luck in getting information about
the graphics driver card in your machine? For others, both Chuck and a
usenet listener, Sam Saunders, have found that my demos run way too fast
on their machines. Since the timing of the demos depends on reading the
chip registers on the video card, there is obviously some kind of video
card that my program can't recognize. Any help will be appreciated. I'll
re-write the programs as soon as I know what to do.
-----------------------------------------------------------------------
Best to all,

Bill P.