Implications

[From Rick Marken (980529.0810)]

Bruce Abbott (980529.0850 EST) --

You know, Bill, I too thought that one could actually work this out
logically, based on the mathematics of the system.... The only way
you can determine the logical implications of such a system is
to do experiments.

Not "logical implications", just "implications". The universal
error curve is a model of what _might_ happen when a variable
controlled by a coerced control system is forcibly pushed from
its reference state by a coercive control system. The model predicts
that the coerced system will "give up" (error goes to zero) when
the controlled variable is caused (by the coercer) to move far enough
from the coercee's reference for that variable. Whether this happens
or not is an empirical, not a logical, question. There are several
alternative empirical possibilities: 1) the coerced system changes
its reference for the coerced controlled variable or 2) the
system continues to generate maximum output even though doing this is
completely ineffective. These are not logical implications; these
are empirical implications of different models of control.
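
A minimal sketch of that prediction (Python; not code from any model posted
here -- the gain, "give-up" point, and coercion schedule are made-up numbers).
The output opposes the coercion while the error is small, then falls back
toward zero once the coercer drives the controlled variable far enough from
the reference:

def universal_error_output(error, gain=10.0, giveup=5.0):
    """Output is proportional to error up to |error| == giveup, then falls off."""
    if abs(error) <= giveup:
        return gain * error
    # beyond the "give-up" point, output magnitude decays toward zero
    sign = 1.0 if error > 0 else -1.0
    return sign * gain * giveup * (giveup / abs(error))

reference = 0.0
cv = 0.0                        # the coercee's controlled variable
for step in range(201):
    coercion = 0.1 * step       # the coercer pushes the CV further each step
    error = reference - cv
    output = universal_error_output(error)
    cv = coercion + 0.05 * output   # CV = coercer's push plus the coercee's opposition
    if step % 50 == 0:
        print(f"step={step:3d}  coercion={coercion:5.1f}  error={error:7.2f}  output={output:7.2f}")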

I imagine you would prefer to consider this a logical question since
you don't seem to know how (or be inclined) to study living control
systems. At least, you haven't shown any inclination over the
last 3+ years on CSGNet and you are still teaching classes with
course descriptions like this:

An introduction to the development and application of statistical,
quantitative, and measurement techniques pertinent to the
psychological sciences. Fundamental concepts of numerical assignment,
sampling theory, distribution functions, experimental design,
inferential procedures, and statistical control.

How, after all these years, can you still teach students that
"numerical assignment, sampling theory, distribution functions...
inferential procedures, and statistical control" are "pertinent"
to psychological science -- that is, to the science of living control
systems? Sheez.

Best

Rick


--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

[From Oded Maler (980529)]

Rick Marken (980529.0810)

[..]

  these are empirical implications of different models of control.

What exactly do you mean by *empirical* implication of *models*?

--Oded

[From Rick Marken (980529.0900)]

Oded Maler (980529) --

What exactly do you mean by *empirical* implication of *models*?

Check out the "S-R vs Control" and "Levels of Control" demos at:

http://home.earthlink.net/~rmarken/demos.html

These demos show plots of the behavior of a control model and
of a person. For example, in the "S-R vs Control" demo you see
the behavior of the input and output variables of a control
model (on the right) and the behavior of the same variables
for a human subject. The behavior of the input and output
variables of the control model is what I call an "empirical
implication" of the model; the model implies what we will see
(empirically) when we watch the input and output variables of a
human subject operating under the same conditions as the
model. And, lo and behold, that _is_ what we see. So one
empirical implication of the PCT model turns out to be "true".
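
A minimal sketch of the kind of model run being described (Python; this is
not the demo's code -- the loop parameters and disturbance are assumptions,
and a real comparison would feed the model the same disturbance the human
subject received):

import math

dt, gain, slowing = 1.0 / 60, 200.0, 5.0
reference = 0.0
output = 0.0
model_input, model_output = [], []

for t in range(600):                                  # ten seconds at 60 Hz
    disturbance = 20 * math.sin(2 * math.pi * t * dt / 5)
    cursor = output + disturbance                     # the model's input variable
    error = reference - cursor
    output += (gain * error - output) * dt / slowing  # leaky-integrator output stage
    model_input.append(cursor)
    model_output.append(output)

# model_input and model_output are the traces one would overlay on the human
# subject's cursor and handle records from a run with the same disturbance.
print(f"final cursor = {model_input[-1]:.2f}, final handle = {model_output[-1]:.2f}")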

Best

Rick


--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

[From Bruce Abbott (980529.1125 EST)]

Rick Marken (980529.0810) --

Bruce Abbott (980529.0850 EST)

You know, Bill, I too thought that one could actually work this out
logically, based on the mathematics of the system.... The only way
you can determine the logical implications of such a system is
to do experiments.

Not "logical implications", just "implications". The universal
error curve is a model of what _might_ happen when a variable
controlled by a coerced control system is forcibly pushed from
its reference state by a coercive control system. The model predicts
that the coerced system will "give up" (error goes to zero) when
the controlled variable is caused (by the coercer) to move far enough
from the coercee's reference for that variable. Whether this happens
or not is an empirical, not a logical, question. There are several
alternative empirical possibilities: 1) the coerced system changes
its reference for the coerced controlled variable or 2) the
system continues to generate maximum output even though doing this is
completely ineffective. These are not logical implications; these
are empirical implications of different models of control.

Look, Rick, I was merely pointing out a logical implication of a
control-system model in which the perceptual signal is subtracted from the
reference signal. The implication is that the magnitude of the error signal
is limited to the magnitude of the reference signal. As Bill points out, if
the gain is reasonably high, the output may saturate long before the error
signal reaches this maximum. This potentially allows some room for the
error to continue increasing (beyond output saturation) and perhaps (in
theory) enter a regime in which the output begins to decline with further
increase in error. The closer the reference is to zero, the less "room"
there is for this process to occur.
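
A quick numerical illustration of that limit (the reference, gain, and
saturation values below are assumptions, not taken from any model in this
thread):

r = 10.0          # reference signal
gain = 100.0
out_max = 500.0   # assumed output saturation limit

for p in (10.0, 8.0, 5.0, 2.0, 0.0):      # perceptual signal forced downward
    e = r - p                             # with p >= 0, e can never exceed r
    raw = gain * e
    out = max(-out_max, min(out_max, raw))
    print(f"p={p:5.1f}  e={e:5.1f}  gain*e={raw:7.1f}  output={out:7.1f}")

# The output saturates once e reaches out_max / gain = 5, well before e
# reaches its ceiling of r = 10, leaving "room" for the error to keep growing.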

Whether the "universal error curve" function actually exists in any real
living control system is another question entirely, and one that can only be
answered empirically. That is why I called the logical conclusion I drew
from the model a "potential" one. Apparently you don't know what that
word means.

I find it interesting that you find it necessary to resort to ad hominem
argument to present your case. That's usually a sure indication that the
person offering such an "argument" has nothing substantive to offer, and
must find some other way to cast aspersions.

I think that the "universal error curve" idea has its proper use -- one can
_simulate_ the effect a higher-level system would have in "turning off" the
lower-level control system -- but judge it an unlikely mechanism in the real
organism. To determine whether a "universal error curve" function actually
exists in any real living control system, one would have to do appropriate
studies. These will be difficult in that any action of higher-level systems
that might be confused with a universal error curve effect would have to be
prevented, somehow. Ditto for reorganization.

Regards,

Bruce

[From Oded Maler (980529)-2 ]

[From Rick Marken (980529.0900)]

  For example, in the "S-R vs Control" demo you see

the behavior of the input and output variables of a control
model (on the right) and the behavior of the same variables
for a human subject. The behavior of the input and output
variables of the control model is what I call an "empirical
implication" of the model; the model implies what we will see
(empiric) when we watch the input and output variables of a
human subject operating under the same conditions as the
model. And, lo and behold, that _is_ what we see. So one
empirical implication of the PCT model turns out to be "true".

I think you are mixing many things. The curves generated by the model
are *logical* implications of the model. They are "empirical" only
for those who cannot analyze them and resort to simulations ;-)
Even for them they are not really empirical unless you
consider mathematical models as part of the real world.

The model does not "imply" the human behavior. At most it predicts
it. I guess that's what you wanted to say, and you used a word whose
common usage is much different.

If the model is a good model and it exhibits a behavior x then this
implies that we are likely to see a behavior close to x in the real-world
phenomenon modeled by the model.

--Oded

[From Rick Marken (980529.1320)]

Bruce Abbott (980529.1125 EST)--

I find it interesting that you find it necessary to resort to ad
hominem argument to present your case. That's usually a sure
indication that the person offering such an "argument" has nothing
substantive to offer, and must find some other way to cast aspersions.

Caught me. I have nothing substantive to offer other than my
ad of your hominem ;-)

Bill Powers (980529.1959 MDT) --

Rick is convinced that you will never understand PCT.

I have a high degree of conviction, yes, but I'm still not
_convinced_ ;-)

Nothing you can say or do will change his mind -- like Ken Starr,
he made it up a long time ago, and now all he's concerned with is
proving himself right.

Not true. There are many things Bruce could do to reduce my conviction.
I've mentioned some of them: he could do some PCT research and
publish it; he could rewrite his research methods text so that
it describes PCT research methods; he could write a paper on
how to study "operant" behavior properly (with the aim of determining
controlled perceptions); he could publish a PCT model of operant
behavior; he could write a book on Testing for Controlled Variables
instead of on Statistics; he could solicit my advice on how to study
living control systems instead of lecturing me about how conventional
psychological researchers are already engaged in the search for
controlled variables; he could suggest (or produce examples of) ways
to improve and extend my studies of control instead of dismissing
them as "trivial".

The fact is that I am not interested in proving myself right. No
one would be happier than me if it turns out that I am wrong about
Bruce. If Bruce starts doing and publicizing the results of research
that demonstrates a clear understanding of behavior as the control
of perceptual variables, if he starts reacting to my posts as though
they might be of some value rather than as though they were an attack
on his personal character, then you will see one happy little PCTer
over here in LA LA Land.

As far as Kenneth Starr is concerned, it's not his dogged pursuit
of the "truth" that I find so numbingly idiotic; it the "truth"
he's seeking that's so ridiculous.

Best

Rick


--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

[From Bruce Abbott (980530.1005 EST)]

Bill Powers (980529.1959 MDT) --

Rick is convinced that you will never understand PCT.
Nothing you can say or do will change his mind -- like Ken Starr,
he made it up a long time ago, and now all he's concerned with is
proving himself right.

Actually Rick is doing a fine job of demonstrating a problem with theory
testing that arises when one looks only for evidence that will _confirm_ a
theory, something called a "confirmatory strategy." Although there is
nothing wrong with such a strategy per se, it can lead to erroneous
conclusions when used by itself. To avoid this problem, one needs also to
employ a "disconfirmatory strategy," which involves testing to determine
whether results occur which should _not_ occur if the theory is correct.

These strategies and an example of their use are presented in my book,
_Research Design and Methods: A Process Approach_, available from Mayfield
Publishing at

http://www.mayfieldpub.com/

If Rick's theory about my understanding of PCT is correct, then I should not
be able to reason correctly about the implications of a negative perceptual
signal with respect to the limitation on the range of the error signal that
this arrangement imposes. However, I was able to do so [see Bill Powers
(980529.1959)]. This provides disconfirmatory evidence for Rick's theory
with respect to my understanding of control theory. However, psychological
research has shown that people tend to accept what fits into their current
schema but ignore, reject, or distort information that does not. Thus Rick
is likely to ignore or rationalize away this disconfirmatory evidence,
rather than allow it to affect his view of me.

Regards,

Bruce

[From Bill Powers (980530.1123 MDT)]

Bruce Abbott (980530.1005 EST)--

Actually Rick is doing a fine job of demonstrating a problem with theory
testing that arises ...

Believe it or not, I have zero desire to pursue this matter any further.
Who is right is of no interest to me.

Best,

Bill P.

[From Bill Powers (980530.1103 MDT)]

Rick Marken (980529.1320) --

Not true. There are many things Bruce could do to reduce my conviction.
I've mentioned some of them: he could do some PCT research and
publish it; he could rewrite his research methods text so that
it describes PCT research methods; he could write a paper on
how to study "operant" behavior properly (with the aim of determining
controlled perceptions); he could publish a PCT model of operant
behavior; he could write a book on Testing for Controlled Variables
instead of on Statistics; he could solicit my advice on how to study
living control systems instead of lecturing me about how conventional
psychological researchers are already engaged in the search for
controlled variables; he could suggest (or produce examples of) ways
to improve and extend my studies of control instead of dismissing
them as "trivial".

But all this assumes that Bruce is under some obligation to live up to your
demands, please you, get your approval, or ask for your advice. I don't see
that any of that is true. Bruce will evaluate PCT as he sees fit; he will
compare it with other approaches according to his understanding of them; he
will retain or abandon other methodologies as he chooses.

Would you have it any other way?

Best,

Bill P.

P.S. Our subdivision totally lost its water supply, so we're delaying our
departure to Boulder by one day. Means one day in Boulder instead of two.


bruce a., until your research is centered around the Test, none of the
strategies outlined in your text matter to me and i'm sure to others--with
respect to PCT. This is not to denigrate your analysis of how science
progresses, but this net is not for the history or philosophy of science. It
is about elucidating the logical peculiarities of the theory and basic
research. Simply put, research talks. Maybe next year the conference will
be closer to your home. Maybe then you could meet Rick and the two of you
knuckleheads can exhaust yourselves and then begin to actually talk. I would
like to talk to you.

i.

[From Rick Marken (980530.1220)]

Bruce Abbott (980530.1005 EST)--

Actually Rick is doing a fine job of demonstrating a problem
with theory testing that arises when one looks only for evidence
that will _confirm_ a theory, something called a "confirmatory
strategy."

This would, indeed, be a bad approach to theory testing; it implies
that disconfirmatory evidence is ignored. What you call a
"confirmatory strategy" is what I call "scientific fraud". When
you test a psychological theory you obtain evidence by observing
behavior in carefully contrived situations. You can't know in
advance whether or not what you observe will "confirm" (be consistent
with) or disconfirm (be inconsistent with) the predictions of the
theory. The evidence you get is the evidence you get; you're not
supposed to contrive (control) the results of your tests. So it
is impossible (in principle if not in practice) to do science using
either a "confirmatory" or a "disconfiormatory" strategy. The
only scientific startegy (Oded notwithstanding) is testing to
determine whether or not the empirical implications (predictions)
of a theory match what is actually observed.

Although there is nothing wrong with such a strategy per se

In fact, there is something _very_ wrong with this strategy; to
carry it out one would have to ignore or conceal disconfirmatory
evidence. That's just scientific fraud.

These strategies and an example of their use are presented in my
book, _Research Design and Methods: A Process Approach_,

I hope you presented these strategies as examples of bad science.
Science is done to _test_ theories, not to confirm or disconfirm
them. Doing science right is _very_ hard because it requires
that one "rise above" (or, at least, act as though one has risen
above) one's own controlling nature so that one does not _care_
whether the evidence confirms _or_ disconfirms a theory.

available from Mayfield Publishing at
http://www.mayfieldpub.com/

I think it would be _very_ instructive for everyone to go to
this site. I think it shows very clearly what is at stake here:
money. Mayfield is making money -- apparently big money -- by
selling textbooks like yours. They are certainly not going to
make money selling textbooks on "Testing for controlled variables"
because that's not what's taught in college research methods courses.
This is what PCT is up against. I bet that the desire for money will
work far more effectively against the new science of PCT than
the desire to be right worked against the new science of Galileo.

If Rick's theory about my understanding of PCT is correct, then
I should not be able to reason correctly about the implications
of a negative perceptual signal with respect to the limitation
on the range of the error signal that this arrangement imposes.

That's not what my theory of you predicts. My theory of you is
that you understand the operation of the basic PCT model; you
also seem to understand the operation of the HPCT model (though
there are some gaps, as evidenced by your discussion of conflict).
You also want the PCT model to be consistent with the data and
methods of conventional psychology. So one prediction is that
you will do a good job of describing the operation of the PCT
model and that you will also do a reasonably good job of applying
this model to anecdotal descriptions of behavior. Another prediction
is that you will fail to see that 1) the individual model of PCT is
fundamentally inconsistent with the conventional statistical approach
to research 2) that conventional research gives us little more
than hints about the perceptions that organisms might be controlling
and 3) that PCT implies a completely new approach to studying
behavior. So far I have seen no evidence that _disconfirms_ my
theory of you; I keep looking for it; I keep doing experiments
to test for it (I asked, for example, for your review of my _Psych
Methods_ paper, hoping to get wild praise -- which would be a major
disconfirmation of my theory -- but instead getting "psychologists
already know that" -- which is consistent with predictions 1) and
3) above) but I've gotten no disconfirmatory evidence yet.

However, I was able to do so [see Bill Powers (980529.1959)].
This provides disconfirmatory evidence for Rick's theory
with respect to my understanding of control theory.

As you can see, it doesn't provide disconfirmatory evidence at all.
I am impressed by your modeling skills and with your understanding
of the PCT model. I recognize and celebrate this aspect of your
understanding of PCT. But the aspect of understanding PCT that
is particularly important to me -- the aspect that has to do with
studying living control systems -- still seems to elude you; and
I predict that it will continue to elude you as long as you harbor
the desire to see merit in the methods and data of conventional
psychology.

However, psychological research has shown that people tend to
accept what fits into their current schema but ignore, reject,
or distort information that does not. Thus Rick is likely to
ignore or rationalize away this disconfirmatory evidence, rather
than allow it to affect his view of me.

This is a good example of how you cling to the idea that there
is merit in the data and methods of conventional psychology.
Here you are applying a group result ("people tend to accept what
fits into their current schema", which probably means that, say,
70% of the people in a sample accepted what fit their current
schema) to an individual (saying that I have this "tendency").

Look, Bruce, I don't want my theory of you to be right. If you
really want to make me happy and _disconfirm_ my theory of you,
I've told you some of the ways you could do it [Rick Marken (980529.1320)].
So get out there and kick my theoretical butt;-)

Best

Rick


--

Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[From Rick Marken (980530.1235)]

Me:

Not true. There are many things Bruce could do to reduce my
conviction.

Bill Powers (980530.1103 MDT) --

But all this assumes that Bruce is under some obligation to live
up to your demands, please you, get your approval, or ask for your
advice.

Not really. I'm just saying that if Bruce _emitted_ these behaviors
it would disconfirm my theory of him.

Bruce will evaluate PCT as he sees fit; he will compare it with
other approaches according to his understanding of them; he will
retain or abandon other methodologies as he chooses.

I'm right there with you!

Would you have it any other way?

Well, it certainly would be nice if I were in complete control of
Bruce's (and everyone else's) perceptions;-). But then I think
about the mediocre job I'm doing controlling just my own perceptions
and I figure it's probably best if I leave control of Bruce's
perceptions to Bruce;-)

Have a wonderful time out there in Brandenburg if I don't talk to you
and Mary before you go. I am really sorry I couldn't make it. Maybe
there will be many more in the future and Brandenburg will become
famous for both the concertos _and_ the PCT conferences.

Best

Rick


--

Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[From Rick Marken (980531.1040)]

i. kurtzer to Bruce Abbott --

Maybe next year the conference will be closer to your home.

To paraphrase Bill Powers, in a paper distributed at one of
the first CSG meetings, which were held near Kenosha, Wisconsin:

Today, Kenosha; tomorrow, Muncie!

Best

Rick


--

Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[From Bruce Abbott (980531.1955 EST)]

Rick Marken (980530.1220) --

Bruce Abbott (980530.1005 EST)

Actually Rick is doing a fine job of demonstrating a problem
with theory testing that arises when one looks only for evidence
that will _confirm_ a theory, something called a "confirmatory
strategy."

This would, indeed, be a bad approach to theory testing; it implies
that disconfirmatory evidence is ignored. What you call a
"confirmatory strategy" is what I call "scientific fraud". When
you test a psychological theory you obtain evidence by observing
behavior in carefully contrived situations. You can't know in
advance whether or not what you observe will "confirm" (be consistent
with) or disconfirm (be inconsistent with) the predictions of the
theory. The evidence you get is the evidence you get; you're not
supposed to contrive (control) the results of your tests. So it
is impossible (in principle if not in practice) to do science using
either a "confirmatory" or a "disconfiormatory" strategy. The
only scientific startegy (Oded notwithstanding) is testing to
determine whether or not the empirical implications (predictions)
of a theory match what is actually observed.

Let's try this again; I'm not talking about ignoring contrary evidence
(which would constitute very bad science indeed). To practice a
confirmatory research strategy, you make a prediction, based on theory or
hypothesis, as to what _should be observed_ under particular circumstances
if the theory or hypothesis is correct. You then set up the circumstances
(or wait for them to occur naturally if they are not subject to your
control) and determine whether _or not_ the results _confirm_ the
prediction. For example, I have a rule in mind which produces a series of
numbers, and it is your job to determine the rule. You are allowed to ask
whether certain numbers are in the series. As a first guess, you theorize
that the rule is "even numbers." You ask whether "2" is in the series, and
I answer "yes." "Ah," you think, "this looks promising." You ask whether
"4" is in the series, and I answer "yes." This continues for 6, 8, 10, 12,
14, 16, 18, 20, and so on; in each case I answer "yes." Your theory has
been confirmed with each observation, so you now confidently assert that the
rule is "even numbers." I answer, "no."

The strategy you followed was a confirmatory strategy. Note that, although
it never happened, if I had said "no" to any of your numbers, you would have
rejected your hypothesis.

In addition to this confirmatory strategy, you should have used a
disconfirmatory strategy. This would involve asking questions whose answers
in the positive would _disconfirm_ your theory. For example, if your
hypothesis was "even numbers," then no odd number should produce a "yes"
response. You ask whether 3 is in the series, and I answer "yes." This
cannot be the case under your hypothesis, so the hypothesis is disconfirmed.
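
A toy version of the game in Python. The hidden rule used here ("any positive
integer") is an assumption for illustration only; the point is just that
confirmatory probes alone cannot expose a hypothesis that is too narrow,
while a single disconfirmatory probe can:

def in_series(n):
    return n > 0                    # hypothetical hidden rule: any positive integer

def hypothesis(n):
    return n % 2 == 0               # the guesser's hypothesis: "even numbers"

# Confirmatory strategy: probe only numbers the hypothesis says should get "yes".
confirm_probes = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
print("all confirmatory probes answered yes:",
      all(in_series(n) for n in confirm_probes))          # True -- looks confirmed

# Disconfirmatory strategy: probe numbers the hypothesis says should get "no".
disconfirm_probes = [3, 5, 7]
print("hypothesis disconfirmed:",
      any(in_series(n) and not hypothesis(n) for n in disconfirm_probes))  # True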

To follow a disconfirmatory strategy, you look for data that should _not_ be
observed under the theory. PCT, for example, might be tested under a
disconfirmatory strategy by observing whether the ability to emit a series
of well-learned movements (e.g., playing a tune on a piano keyboard) is lost
when tactile, kinesthetic, auditory, and visual feedback are blocked during
the task. Under PCT, such feedback is crucial, so one would not expect the
person to be able to perform the movements as learned; the observation that
the person can do so would disconfirm the theory.

This is all explained in my book, _Research Design and Methods: A Process
Approach_, which represents a shameless attempt by greedy capitalists
(Mayfield Publishing, my coauthor Ken Bordens, and myself) to present at an
introductory college level all facets of the research process, from
searching the literature to publishing the results (and maybe even to
receive some financial reward for having done so in a competent way).

available from Mayfield Publishing at
http://www.mayfieldpub.com/

I think it would be _very_ instructive for everyone to go to
this site.

Me, too. There you will find a nice chapter outline to inform you of the
range of topics covered in the text, from how to get a research idea,
through how to ethically treat human participants and animal subjects,
to how distortions in the published literature are introduced by current
research publication practices such as the use of statistical significance
testing and the p < .05 criterion.

However, psychological research has shown that people tend to
accept what fits into their current schema but ignore, reject,
or distort information that does not. Thus Rick is likely to
ignore or rationalize away this disconfirmatory evidence, rather
than allow it to affect his view of me.

This is a good example of how you cling to the idea that there
is merit in the data and methods of conventional psychology.
Here you are applying a group result ("people tend to accept what
fits into their current schema", which probably means that, say,
70% of the people in a sample accepted what fit their current
schema) to an individual (saying that I have this "tendency").

If 70% of Americans are overweight, this does not mean that you will have a
tendency to be overweight. It means that if you are an American, the
chances are better than even that you are overweight, given that I have no
other information about you personally.

In fact there are good reasons why most people are reluctant to accept
information that does not seem to "fit" with what they believe they already
know. For one, they've usually been looking for (and found) a lot of
confirmatory evidence which gives them confidence in their current views.
For another, changing these beliefs may require abandoning cherished
assumptions and may lead to a period of considerable uncertainty concerning
what to believe or not to believe -- a very uncomfortable state for most.
Or they may simply not want to do the cognitive work involved in sorting
things out. It's easier, all things considered, to find a way to reject the
offending information and retain the old beliefs, or failing that, to
reinterpret the information so that it appears to "fit" after all.

Interestingly, your responses to my post indicate that the generalization
does indeed apply to you, personally, but then again, I could simply be
trying to put them in a context that will support my prior belief.

Regards,

Bruce

[From Rick Marken (980531.2200)]

Bruce Abbott (980531.1955 EST) --

In addition to this confirmatory strategy, you should have used
a disconfirmatory strategy. This would involve asking questions
whose answers in the positive would _disconfirm_ your theory.
For example, if your hypothesis was "even numbers," then no odd
number should produce a "yes" response. You ask whether 3 is
in the series, and I answer "yes." This cannot be the case under
your hypothesis, so the hypothesis is disconfirmed.

Given this definition, I have done several experiments using the
"disconfirmatory" stratergy. The "Open Loop Control" demo at

http://home.earthlink.net/~rmarken/demos.html

is an example of such an experiment; if the results ("answer") of
this experiment are "yes, control is just as good without
perception of the cursor as with it" we would have disconfirmation
of PCT. The same strategy is used in the experiment described in the
"Closed loop behavior" chapter (p. 67) of Mind Readings. Check it
out.
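
A sketch of the logic of that kind of test (Python; not the demo's code --
the disturbance and loop parameters are made up). PCT predicts that control
collapses when the cursor cannot be perceived, so equally good "open loop"
performance would be disconfirming evidence:

import math, random

def run(closed_loop, steps=1200, dt=1.0 / 60, gain=200.0, slowing=5.0):
    output, errs = 0.0, []
    for t in range(steps):
        disturbance = 30 * math.sin(2 * math.pi * t * dt / 7) + random.uniform(-1, 1)
        cursor = output + disturbance
        perceived = cursor if closed_loop else 0.0   # open loop: cursor not perceived
        error = 0.0 - perceived                      # target is zero
        output += (gain * error - output) * dt / slowing
        errs.append(abs(cursor))
    return sum(errs) / len(errs)

print("mean |cursor - target|, closed loop:", round(run(True), 2))
print("mean |cursor - target|, open loop:  ", round(run(False), 2))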

I also used this strategy on you when I asked for your review of
my "Dancer.." paper. My theory was (and is) that you are a
conventional psychologist (an "even number generator") and that
you would find nothing significant about (you would say "no" to) a
paper ("question") trashing (though subtly and elegantly) conventional
methodology (the paper is analogous to your "odd number" question).
Sure enough, you didn't see anything significant in the paper (you
did not say "yes, that was a great paper"); so you did _not_
disconfirm my theory; my "disconfirmatory strategy" didn't lead
to disconfirmation of my theory of you.


-----

I'm sorry I started this little spat, Bruce. I guess I keep hoping
that you'll start seeing things my way and start making some
substantive contributions to the development of PCT science. Then
you pop up on CSGNet again, still steeped in the conventional point
of view, with nothing to report in terms of any efforts to teach,
study or promulgate PCT. My problem (Bill's theory of me
notwithstanding) is not that I am hoping that you never get PCT;
my problem is that I keep harboring the desperate hope that you
_will_ get it.

But, as Bill said, you're going to get from PCT what you're going to
get from it and there is nothing I can do about it. PCT seems
to fit nicely into conventional psychology for you; I guess that's
the way it's going to be. Have a wonderful time with it; if you
ever publish anything on PCT, please let me know; I'd be interested
to see what you come up with.

I'm going out of town for a week so the classroom is all yours if you
want it. Enjoy.

Best

Rick
--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/