MISC PCT subjects (hey is this subj information-rich?)

[Chris Cherpas (951110.1046 PT)]
[re:> Bill Powers (951109.2040 MST)]

In PCT, a perception is a neural signal that is a function of some
aspect of the sensed world.

OK. EAB just says if there's no break in the causal links, this
always brings us back to the sensed world, so let's talk about the
world instead of the perception, per se. BFS calls "perception"
another form of behavior (as opposed to motoric). I guess PCT
questions how good a heuristic this is for understanding organisms.

Religions can be seen as attempts to account for
phenomena, but the phenomena are of a kind that science has shied away
from -- such as consciousness.
I don't buy the theories, but I respect
the discussions of the phenomena. Perhaps this is why I'm not shy about
using terms like wanting, purpose, perception, and so on. I'm looking
for scientific explanations of these phenomena, which I think can be
found -- and not by dismissing them as illusions.

"The operational analysis of psychological terms" (Skinner, 1945, Psych
Review, 52, and in BFS's "Cumulative Record" book) is an attempt to
describe how/why we are able to use these terms. It's the beginning of
a more fine-grained approach that (the later) Wittgenstein proposed, which
is essentially, "look at all the conditions under which people say X,
and that's what X means." BF also spewed lots of "X is an illusion,"
too, though, so it's easy to be put off. That aside, there's still
the problem of WHICH interpretation of "perception" you mean (i.e.,
under what conditions do YOU say "perception"?). Again, I think "purpose"
adds nothing to "reference level" and is mixed in with the traditional
stuff that J.R.Kantor (in conversation) called "spookology."

[... on concurrent VI procedures ...]

Let me first say that there's lots of variation in how Concurrent VIs
are implemented, and even some written about just that, but that the
"Procedure" section of an article should tell what was really done.
What they should do is use state diagrams. My PhD advisor, Arthur
Snapper, adapted a notation that is ideal for the job; in fact, it
translates very easily into computer-runnable programs in a
software system called "SKED." I would STRONGLY recommend
that PCT-ers document their procedures with state notation -- it
should be VERY friendly to folks in the control business.
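
To give the flavor -- this is NOT SKED syntax, just a rough Python
sketch of the state/event/transition idea, with made-up names:

# Rough flavor of state notation (NOT actual SKED syntax): a single VI
# schedule expressed as named states, the event each state waits for,
# and the state it moves to.  All names here are illustrative.

STATES = {
    # state        : (event that ends it,     next state)
    "VI_TIMING"    : ("vi_interval_elapsed",  "RFT_SET_UP"),
    "RFT_SET_UP"   : ("key_peck",             "HOPPER_UP"),
    "HOPPER_UP"    : ("hopper_time_elapsed",  "VI_TIMING"),
}

def next_state(state, event):
    """Advance only when the event is the one this state is waiting for."""
    expected, destination = STATES[state]
    return destination if event == expected else state

# e.g. next_state("VI_TIMING", "key_peck")  -> "VI_TIMING" (peck does nothing yet)
#      next_state("RFT_SET_UP", "key_peck") -> "HOPPER_UP" (peck collects the rft)
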
But, on to the specifics:

When does an
"interval" begin? At the instant the reinforcement is delivered?

I used pigeons who had, say, 3" (3 sec) of access to a grain hopper,
and stopped the VI timers during delivery. Some people throw
away the first interval of a session anyway, or start the
session with a little hopper treat, which is kind of nice.
So, in general, the interval starts when hopper access ends
(i.e., when the grain is no longer available).
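
In simulation terms the rule is simply that the VI clock runs only
while the food is unavailable; a minimal sketch (my own naming,
purely illustrative):

class VITimer:
    """Minimal sketch: the VI timer only advances while the hopper is down."""

    def __init__(self, interval):
        self.remaining = interval    # seconds left in the current interval
        self.hopper_up = False       # is food currently being delivered?

    def tick(self, dt):
        """Advance simulated time by dt seconds; True when the interval times out."""
        if not self.hopper_up:       # timer frozen during food delivery
            self.remaining -= dt
        return self.remaining <= 0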

Are responses counted if they continue for a while even after the
reinforcement is delivered? With which reinforcement are they
associated, the previous one or the next one? Is the collection time
included? If so, how do you tell when the animal is ready for the next
interval, and not still chewing away and cleaning its whiskers (or
beak)?

You could record that, but no self-respecting pigeon would rather
peck a piece of plastic than eat. Remember, we're talking about
MANY sessions in which every time the hopper is open for feeding,
lights change, a solenoid goes click, the hopper goes clunk, and
the birds basically can't wait to stop pecking and start munching.
So, if you look in the chamber, you don't see much wasted pecking.

For rats and pellets it can be more complicated -- hence the use
of little dippers of Carnation instant milk. One control that's
handy for rat levers is to have retractable levers, so that when
the food is ready the lever disappears. That doesn't directly
solve the problem of cleaning my whiskers, though. In that case,
one COULD require a "done eating" response on a special
operandum to recycle the schedule.

If the animal takes an extra long time on one key, does the
interval on the other key keep running?

Yes, indeed. That's part of what makes these "concurrent." I
think it was Jack Findley (who started the changeover key approach)
who tried the stop-the-other-timer procedure, reported it in JEAB,
and then everybody forgot about it.

If the animal switches keys
before an interval has timed out, is the reward available immediately
when the animal gets back to the same key again?

In theory, yes. In practice, the use of a changeover delay will slow
this down -- basically to keep the switching itself from becoming the
one reinforced response (back and forth and back and forth and...),
particularly since the birds are sometimes initially shaped to go
back and forth before getting into the real experimental condition.
As I posted to Bruce a few messages back, I've used a procedure that
doesn't use a COD, and that made me sleep better.
Herrnstein called the COD a "procedural wrinkle" as I recall, but
I always thought it kind of sucked. Some studies (maybe Baum) showed
long CODs will tend to give you more overmatching, so you can see there's
quite a bit of slop from experiment to experiment.
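
For concreteness, the COD rule itself is tiny; a minimal sketch (the
parameter names and the 2-sec value are just illustrative):

def rft_allowed(now, last_changeover_time, rft_is_set_up, cod=2.0):
    """Sketch of a changeover delay: a peck at time `now` can collect a
    set-up reinforcement only if at least `cod` seconds have passed
    since the last changeover."""
    return rft_is_set_up and (now - last_changeover_time) >= cod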

What happens when there
are multiple pecks on a changeover key? Does this result in multiple
switches, or is there a built-in delay to make multiple pecks
ineffective? Does there have to be at least one peck on the operative
key before another changeover switch can occur? One reinforcement?

Realize first you don't HAVE to use a changeover key, but I like it.
Multiple pecks on a CO key interested me in some pilot work I did on
matching and levels of food deprivation, because I refused to use a
COD. I collected interresponse times (IRTs) on the changeover key
(these are called ICTs -- interchangeover times) and got a big spike
at less than .5 sec, which may correspond to a kind of "elicited"
peck. For some pigeons, one peck means a little seizure of multiple
pecks, especially if they're relatively more deprived. (But, by the way,
you don't have to do the "80% free feeding weight" procedure. I've
just used longer sessions and longer hopper times and kept my birds
at close to 100% -- in fact, you generally get less undermatching
with less deprivation and even overmatching with a stuffed bird).
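
Getting the ICT distribution out of a record of changeover-peck times
is trivial, by the way; a sketch (the bin width and names are just
illustrative):

from collections import Counter

def ict_histogram(changeover_times, bin_width=0.5):
    """Interchangeover times (ICTs) from successive changeover-peck
    timestamps, binned coarsely; the short-ICT spike shows up as a big
    count in bin 0."""
    icts = [b - a for a, b in zip(changeover_times, changeover_times[1:])]
    return Counter(int(ict / bin_width) for ict in icts)   # bin index -> count

# e.g. ict_histogram([0.0, 0.3, 0.7, 5.2, 5.5]) -> Counter({0: 3, 9: 1})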

Anyway, as in my recent post to Bruce, I say turn off the key-light
on the CO key until the bird makes at least one peck on the changed-to
schedule key. Wild birds don't seem to be as peck-crazy as arian-race
lab types, but, in any case, don't give them a choice on this: you
switched, now you commit to this. On the other hand, don't require
that they pick up a reinforcement at a side in order to switch again;
that makes it too much of a chained procedure: we want freedom of choice
wherever possible, but not freedom to not even notice there are
different schedules available. Also, I say it's better to use a
"linear VI," where the timer doesn't wait for a scheduled reinforcement
to be collected before it starts timing the next interval; again,
though, that doesn't mean timing during the hopper access (rft) either.
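
To make the linear-VI point concrete, a minimal sketch of the timing
rule as I mean it (the naming, and whether set-ups can stack, are my
own choices here):

class LinearVI:
    """Sketch of a "linear VI": the interval timer keeps running even
    while a scheduled reinforcement is waiting to be collected; it
    pauses only during hopper access."""

    def __init__(self, intervals):
        self.intervals = list(intervals)   # pre-drawn interval lengths, seconds
        self.elapsed = 0.0
        self.set_ups = 0                   # scheduled but not-yet-collected rfts

    def tick(self, dt, hopper_up=False):
        if hopper_up:                      # only food delivery freezes the clock
            return
        self.elapsed += dt
        while self.intervals and self.elapsed >= self.intervals[0]:
            self.elapsed -= self.intervals.pop(0)
            self.set_ups += 1              # next interval is already being timed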

I didn't realize how complex a "simple" choice experiment
becomes when you get down to the details of programming one.

I know what you mean. Again, I strongly recommend using, and
publishing, a state diagram for any such procedure. I agree
with Snapper (mentioned above) that every such article should
include this.

What's melioration theory? I know what "melioration" means -- making
things better -- but I don't suppose that is the right meaning.

Herrnstein eventually came up with an explanation for matching (actually
it was Will Vaughan's idea if you ask me, but Harvard's employment
policies don't seem to preclude slave labor by non-faculty research
associates). Melioration theory is a relatively local approach (as
opposed to economic-maximization or optimal-foraging accounts), but it
doesn't get into a peck-for-peck optimization trip either. Melioration
says that if you detect a difference in local reinforcement value
between one alternative and another, you'll shift a little more of
your time toward the richer one. "Local reinforcement value" can be
operationalized in ConcVIs
as number of reinforcements picked up during time spent WITHIN each
of the two (or more) schedules -- a CO key helps you measure this
fairly accurately. In a steady state, with matching, the local
rates of reinforcement are equal even if the alternatives are
VI1' versus VI6' because you're spending six times as much time
on the VI1' (mmm, boy) as on the VI6' (yuck). In other words,
you hang out as long as you can on that good schedule, but every
once in a while it pays off to check on the lousy one.
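
Putting rough numbers on that (my arithmetic; an idealized steady
state and a 70-minute session, chosen just for round figures):

# Both VI timers run on session time, so the programmed rates are about
# 1 rft/min (VI 1-min) and 1 rft per 6 min (VI 6-min).  With matching,
# time is split 6:1, and the LOCAL rates -- rfts obtained per minute
# actually spent at each alternative -- come out equal.

session_min = 70.0
rfts_rich   = session_min / 1.0       # ~70 rfts available on the VI 1-min
rfts_lean   = session_min / 6.0       # ~11.7 rfts available on the VI 6-min

time_rich   = 60.0                    # 6/7 of the session (matching: 6 to 1)
time_lean   = 10.0                    # 1/7 of the session

local_rich  = rfts_rich / time_rich   # ~1.17 rfts per minute spent there
local_lean  = rfts_lean / time_lean   # ~1.17 rfts per minute spent there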

When the schedules are shifted, let's say reversed, the difference
in local rates is huge because you're spending way too much time
on the (new) lousy schedule which hardly ever pays off, and every time
you check on the (new) wonderful schedule, there's a treat almost
immediately available. So you shift your distribution while "learning"
the new schedule arrangement (I put "learning" in quotes because this
is such a parametric kind of learning, instead of, say, a whole new
topography or new kinds of stimuli, but, hey, it's a new distribution
-- some might even say a new "reorganization.")
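
Here's a toy caricature of that process (my own toy dynamics, not
Herrnstein and Vaughan's formal model): keep nudging the time
allocation toward whichever alternative currently has the higher
local rate, and you settle at about the split matching predicts.

def meliorate(rate_a=1.0, rate_b=1.0 / 6.0, p_a=0.5, step=0.01, n=5000):
    """Toy melioration: rate_a/rate_b are programmed rfts per session
    minute; p_a is the fraction of time spent on alternative A."""
    for _ in range(n):
        local_a = rate_a / p_a             # rfts per minute spent on A
        local_b = rate_b / (1.0 - p_a)     # rfts per minute spent on B
        p_a += step if local_a > local_b else -step
        p_a = min(max(p_a, 0.01), 0.99)    # keep the allocation off the edges
    return p_a

# meliorate() ends up near 0.86 -- i.e., about the 6:1 split matching predicts.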

[Bill Leach 951110.22:57 U.S. Eastern Time Zone]

[Chris Cherpas (951110.1046 PT)]

OK. EAB just says if there's no break in the causal links, this
always brings us back to the sensed world, so let's talk about the
world instead of the perception, per se. BFS calls "perception" ...

OK, let's... for a moment look again at the sensed world.

The entire foundation of EAB and other experimental behavioural sciences
is that the "scientific method" is a proper and valid approach to the
study of behaviour (an assertion that the rest of us here believe also).

However, "right out of the box" a mystical property is asserted for a
physical object. This property is the property of being a "re-enforcer".

The behaviour of the measurable physical world is also described
mathematically in terms of forces and mass. Above the sub-atomic level
these formulas are often quite accurate (as in better than 1 part in 1E8).

Living things are of course also made up of such material.

If we are interested in "what a living organism appears to be doing in a
particular situation" then establishing that situation and noting what
the organism does is appropriate. If, OTOH, we are interested in why or
how, then a recognition that it is a control system we are dealing with
is fundamental.

the problem of WHICH interpretation of "perception" you mean (i.e.,
under what conditions do YOU say "perception"?). Again, I think "purpose"
adds nothing to "reference level" and is mixed in with the traditional
stuff that J.R.Kantor (in conversation) called "spookology."

"Perception" just _IS_ the signal that is compared to the reference when
discussing control system or control loop operation. The observer or
"discussee" might talk about the perception in terms of other matters but
to a control loop the perception has no "meaning".

When used in PCT _discussions_, perception might also refer to input
signals that are not controlled but again, even this signal is the
internal signal of the subject. As both Martin and Bill have pointed out
in different ways in the "cat in the box" discussion, the observer
normally assigns to such perceptions a great deal of meaning that is
not likely present in the organism being studied.

state diagrams

State diagrams do not work for closed-loop systems. Indeed, a state
diagram is quite misleading for representing a non-discrete process.
The "states" for a control loop are "in control" and "not in control".
A diagram showing functional relationships can aid in understanding
the control-theory formulas used to describe what is happening. Such
diagrams are what is normally used in PCT papers.
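
For contrast, here is a toy sketch of the functional-relationship kind
of description (gains, names, and the disturbance purely illustrative):
a continuously acting loop whose perception ends up near the reference
in spite of a disturbance, with no discrete "states" anywhere.

def run_loop(reference=10.0, gain=5.0, slowing=0.1, steps=200):
    """Toy control loop: continuous functional relationships, no discrete states."""
    output = 0.0
    perception = 0.0
    for t in range(steps):
        disturbance = 3.0 if t > 100 else 0.0   # the environment changes partway through
        perception  = output + disturbance      # input function (trivially direct here)
        error       = reference - perception    # comparator
        output     += slowing * gain * error    # slowed integrating output function
    return perception                           # ends up near the reference anyway

# run_loop() -> approximately 10.0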

-bill