Why We Fight (was Re: Thermodynamics)

[From Rick Marken (2014.08.21.1920)]

···

Philip (8.20.14 @ 20:01)

RM: Thanks Philip; this was a very thought-provoking post for me. It really got me to go up a level by getting me to think about what these conflicts are about.

PY: Jesus Christ, you should be ashamed of yourselves. Stop fighting and let’s get to the heart of this.

RM: I believe that this “fighting” is the heart of it, for me anyway. My interest in PCT is scientific, and science is a social enterprise; as in all such enterprises, there will be conflict. This is because people have different beliefs regarding the way things, like human nature, work. So people doing science are often in conflict about whose beliefs are correct. The nice thing about science is that these conflicts are ultimately resolved by testing the beliefs against observation. But in order to do this we have to have a nice, clear idea of what these beliefs are, so that we know what observations are predicted.

A formally developed scientific belief is a scientific theory. PCT is such a belief. The process of getting to a nice, clear idea of what a theory like PCT predicts involves getting the theory into a form, outside of people’s heads – as a set of equations or a software or hardware model – so that people can agree that that’s the theory. I think most of what are seen as “fights” on CSGNet are attempts to come to agreement about what PCT is so that it can be implemented as a mathematical or computer model in order to see what it actually predicts. The process of getting to these agreements can involve rather heated – I prefer to call it spirited – discussions, but that’s the way conflict works; each side pushes harder and harder. But unlike arguments over other beliefs, such as religious and ideological ones, arguments about scientific theories are eventually resolved by observation (test).

PY: What is the purpose here? Are we trying to establish exactly “why PCT is necessary”?

RM: In this specific case my goal was to try to get to a point where I could develop a working model of what Martin was proposing and/or write a computer demonstration of how control of entropy worked (or didn’t). Although these arguments ramble all over the place, this one actually gave me a great idea for a demo that I may implement if I get the time. The demo would show that control does not necessarily involve a reduction of entropy, where entropy would be measured as Shannon’s information measure, H = -Sum(p_i * log(p_i)).
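For concreteness, the measure Rick names can be computed directly. This is just the standard Shannon formula (base-2, so the result is in bits), nothing specific to the proposed demo:

```python
import math

def shannon_entropy(probs):
    """Shannon information measure H = -sum(p_i * log2(p_i)), in bits.
    Terms with p == 0 contribute nothing (the limit of p*log p is 0)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair square/circle sequence has maximum entropy (1 bit per symbol);
# a deterministic one has zero.
print(shannon_entropy([0.5, 0.5]))  # 1.0
print(shannon_entropy([1.0, 0.0]))  # 0.0
```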

PY: Is this a quarrel over whether a tracking task or an observation of decay in living organisms establishes the necessity of PCT?

RM: Yes. And other things as well. It’s pretty much going all over the place, as these discussions often do. But I always get something worthwhile out of them.

PY: I don’t even know.

RM: Neither do I, often.

PY: But you guys REALLY seem to be having a blast with this.

RM: I certainly am. I don’t know about Martin. He gets pretty exasperated with me so he might not be having as much fun.

PY: When I look at these conversations, I truly believe PCT is in a state of crisis.

RM: These kinds of conversations have been going on since PCT began in 1961; certainly since CSGNet began in 1990. So I guess PCT has always been in crisis. But I don’t see it that way. The only crisis for PCT would be if behavior turned out not to be a control process, which is highly unlikely.

PY: There is only one way to put an end to this.

We all need to answer the following question: what is the most serious threat to PCT?

RM: Treating it as revealed truth.

PY: We all know in our heart of hearts that PCT is accurate and true (meaning that: when we came across PCT, we felt intuitively that we knew something which we never knew before). We are absolutely certain that Bill Powers was a good man and that his ideas were powerful. But Bill Powers died before he ever saw PCT blossom into fruition. And PCT has still not blossomed. And PCT will not blossom at this rate. We’ve been seeing the same exact sentences describing the meaning of control for decades now.

RM: I think PCT “blossomed” back in 1960, when Bill first developed it. It hasn’t “blossomed” in the sense of becoming generally accepted in the behavioral/life sciences because there is tremendous inertia preventing this from happening (textbooks, curricula, careers, reputations, etc). I don’t believe PCT is being prevented from blossoming in the sense of becoming more popular by arguments on CSGNet.

PY: Very good. We know that PCT explains every observation we have. And it does so WITHOUT exception. Thus, unlike most scientific theories, PCT is essentially immune to falsification.

RM: PCT is certainly falsifiable. Only religious beliefs are not falsifiable, which is what makes them so uninteresting.

PY: Why has QM - a theory which has absolutely NO(!!!) intuitive appeal - become a household name, whereas PCT is relegated to obscurity?

RM: For the reasons I gave above: mainly the inertial forces that now exist in the behavioral sciences.

PY: There needs to be a link between quantization and PCT and I think I can find it.

RM: That’s fine. But that approach to science doesn’t interest me much. I like building models and testing them against actual observation. That’s what originally attracted me to Powers’ work and it’s really what I enjoy doing most.

PY: Rick and Martin (because it’s mainly you two), I promise not to post anything stupid or snobby if you don’t either (we all know I’m the master of the snobby post anyway).

RM: I’m sure I’ve posted some stupid stuff but the fact is that I enjoy the spirited interchanges on the net, as long as they are carried on in good faith and are based on a reasonably good understanding of PCT. I’ve certainly benefited from several such interchanges over the years, with Martin and others (including Bill Powers). I think these spirited interchanges are a sign of a healthy, scientific interest in understanding the controlling done by living systems.

Best regards

Rick


Richard S. Marken, Ph.D.
Author of Doing Research on Purpose.
Now available from Amazon or Barnes & Noble

[Martin Taylor 2014.08.21.23.16]

Refrigerators do control entropy, at least if the refrigerator contents don’t change, and that’s all they do, but I’m confused as to why you would want to bring control of entropy into this discussion. Most control systems are controlling other variables. You are the only one who has talked about controlling entropy.
That would be truly fascinating. To reduce the variability of an internal variable while not reducing its uncertainty, keeping the units of measure constant, would be a real tour-de-force, if not a contradiction in terms.
Remember that if a variable is distributed with a normal Gaussian distribution of standard deviation S, the uncertainty is given by U = log2(sqrt(2*pi*e)*S) (Shannon, 1963 edition, p. 89). In other words, if the control ratio (RMS expected)/(RMS actual) is, say, 8, the uncertainty (entropy) is reduced by 3 bits.
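Martin’s arithmetic can be checked numerically; the sketch below (Python purely for illustration) applies his formula and shows that the 3-bit reduction falls out of the factor-of-8 change in S, independent of the units:

```python
import math

def gaussian_uncertainty_bits(sd):
    """Differential entropy of a Gaussian, U = log2(sqrt(2*pi*e) * S), in bits."""
    return math.log2(math.sqrt(2 * math.pi * math.e) * sd)

# A control ratio (RMS expected)/(RMS actual) of 8 shrinks S by a factor
# of 8, reducing U by log2(8) = 3 bits whatever the units of S are.
reduction = gaussian_uncertainty_bits(8.0) - gaussian_uncertainty_bits(1.0)
print(round(reduction, 9))  # 3.0
```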
I do not see how you will create a demo of a control system that actually stabilizes a variable while not reducing its entropy (uncertainty). But if you can pull it off, you might get a Nobel Prize.
Martin


Bravo, Rick! Once again, the consummate scientist :-)

Andrew


[From Rick Marken (2014.08.22.1130)]

···

Martin Taylor (2014.08.21.23.16)–

MT: That would be truly fascinating. To reduce the variability of an internal variable while not reducing its uncertainty, keeping the units of measure constant, would be a real tour-de-force, if not a contradiction in terms.

RM: I think you may be misunderstanding me because I was misunderstanding you. I re-read your Editorial in IJHCS and, again, found it to be excellent. I think my problem resulted from reading into your thermodynamic rationale for the necessity of PCT the implication that control always involves a reduction in the variability of the organism’s environment. In fact, you only talk about the importance of stabilizing (reducing the variability of) the organism’s internal environment, its “essential internal chemistry”. But you go on to say that this is done by controlling “important states of the outer world”. So I think I mistakenly read this to mean that you were saying that control also involves reducing the variability of the external environment. But, in fact, you never said that; you were correctly implying that control involves reducing the variability of perceptual aspects of the environment, keeping them from varying around their “desired states”.

RM: The demo I was describing was aimed, then, at demonstrating the incorrectness of a claim about control that you never made: that control always involves reducing the variability of variables in the external environment. My demo is aimed at showing that control could involve maintaining an increase (rather than a decrease) in the variability of a variable in the external environment. So while my proposed demo is irrelevant to anything you said, I still think it might be worth describing it because I think it does demonstrate some important facts about control.

RM: The demo would simply involve having a participant control the variability of a variable, such as the relative probability of occurrence of a square (S) and a circle (C) presented in a temporal sequence by the computer. So the participant would see a continuous binary sequence of Ss and Cs presented for, say, 1 second each. The variability of the sequence is determined by the relative probability of occurrence of an S, P(S), or a C, P(C), each second, where, of course, P(C) = 1 - P(S).

RM: When P(S) = .5, P(C) = .5 and there is maximum variability of the sequence of Ss and Cs (and maximum information, H). When P(S) = 1.0 or 0.0 (so that P(C) = 0.0 or 1.0 respectively) then the variability of the sequence is minimum (0, actually). The participant is asked to control the variability of the sequence of Ss and Cs at some target level, such as keeping it at a maximum. This is done by moving the mouse as necessary to compensate for disturbances to the variability of the sequence produced by the outside world (in this case, the computer). The computer-generated disturbance is simply a slowly varying change in the value of P(S) (and thus P(C)). If the participant is trying to keep the variability of the sequence at a maximum, then as the computer reduces P(S) below .5 the participant will have to move the mouse so as to increase P(S); and vice versa when the computer increases P(S) above .5.
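A minimal simulation of the loop Rick describes might look like the sketch below, with the participant replaced by a simple proportional controller and the mouse by a scalar output. The gain, slowing, and disturbance parameters are all assumptions for illustration, not anything specified in the post:

```python
import math
import random

def entropy(p):
    """Perceived variability of the S/C sequence: H(p) in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

random.seed(1)
reference = 1.0            # keep variability at its maximum (1 bit)
output = 0.0               # stands in for the participant's mouse position
disturbance = 0.0
gain, slowing = 20.0, 0.1

for _ in range(2000):
    # slowly varying disturbance to P(S), per Rick's description
    disturbance = max(-0.4, min(0.4, disturbance + random.gauss(0, 0.005)))
    p_s = min(0.99, max(0.01, 0.5 + disturbance + output))
    error = reference - entropy(p_s)        # H(p_s) <= 1, so error >= 0
    direction = 1.0 if p_s < 0.5 else -1.0  # push P(S) back toward .5
    output += slowing * gain * error * direction

print(round(entropy(p_s), 2))
```

With the reference at 1 bit the controller holds H near its maximum by keeping P(S) near .5 against the disturbance; with the reference at 0 the same loop would drive P(S) toward 0 or 1. That is the point of the demo: what gets stabilized is the perception of variability, not the sequence itself.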

RM: What this simple little demo would show (I presume) is that people can control a perception related to the variability of an external environmental variable. It would also show that while control involves the stabilization (reduction in the variability) of a perception of some environmental state of affairs it does not necessarily involve the stabilization (reduction in the variability) of the environmental correlate of that perception (though it could, if the participant decides to keep the variability of the sequence at a minimum).

RM: While the demo is not relevant to your thermodynamic justification for PCT I think it still might be a worthwhile demo to build in order to demonstrate the conceptual difference between stabilization of a perception (keeping it close to a reference) and stabilization of the environmental correlate of that perception (which may involve an increase in the instability of that variable). Understanding this distinction would, perhaps, make it easier to understand “thrill seeking” and the desire for “variety” from a PCT perspective. Both seem to involve stabilizing (controlling) a perception of “instability” at a reference that is considerably greater than zero.

RM: Of course, another good reason to develop the demo would be because it could be the basis of a little research project aimed at determining just what perception is being controlled when people control variability, assuming that they can control variability.

Best

Rick



Richard S. Marken, Ph.D.
Author of Doing Research on Purpose.
Now available from Amazon or Barnes & Noble


[Martin Taylor 2014.08.22.14.41]

Actually, control of one perception always implies an _increase_ in the overall entropy of the organism’s external environment, even though it also always implies a reduction in the variability of one degree of freedom of its environment.
That’s true, but as you often point out, controlling a perceptual (internal) variable also implies reducing the variability of the corresponding environmental variable.
The variability of something is a degree of freedom in itself. If that’s the perception you control, then that’s the environmental degree of freedom that is made more stable than it would otherwise be. The perception that is varying may be varying dramatically more than it would if you were not controlling for having it vary wildly, but the variable you are controlling (the variability) is not – at least it is not if you are controlling well and with a fixed reference value.
Remember that when you compute an RMS value, you divide by N-1. The “-1” is the degree of freedom for the mean. But if you fix the RMS and compute some other statistic, you would then have to use N-2, because you had fixed another degree of freedom, and there are only N degrees of freedom in a set of N samples. The same applies here. If you fix the variability of a set of values, even ones spread over time, that variability is a degree of freedom.
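The “-1” Martin mentions is the familiar Bessel correction in the sample-variance formula; a small illustration (the data here are hypothetical, chosen only to make the arithmetic easy to check):

```python
def sample_variance(xs):
    """Unbiased sample variance: divide by N - 1, because estimating the
    mean from the same N samples consumes one degree of freedom."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # mean = 5.0
print(sample_variance(data))  # 32/7, about 4.571
```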
Actually, if the perception is a true function of the environmental state, stabilizing one implies stabilizing the other. If what you are stabilizing is a rate of variation of a perception, then you are stabilizing the equivalent rate of variation in the CEV defined by that perceptual function. When you do the TCV, it works only because you (experimenter) can see something you guess might be the corresponding CEV. If it stabilizes when the subject seems to be controlling a perception, then at least your guess is related to the perception. The more it stabilizes, the closer the relation. If that were not so, then the TCV would be impossible.
Yes, even when I was quite young I thought that what people did was aim for a certain level of variability at all levels of perception (I got the levels-of-perception idea from my “Cassel’s Children’s Encyclopedia” [I think that’s the title], which explained Donders’ ladder of perception from sensation to, I think, “apperception”). More variation at one level would be balanced by less at another level, or so I thought at the time. Actual stabilization is usually quite boring, but stabilization of variation may not be.
Also good to do, but I would guess that it would be rather hard to do, given the trade-off between precision in value and precision in time (it’s another of those complementary Heisenberg pairs). I’m guessing that unless you somehow constrained them, subjects would be inconsistent in how they balanced the precision of “the variability up to now” against the precision of “now”. You can’t get an accurate measure of variability from one sample, or even from three or four, so the question is how many samples the subject would be including in the estimate of variability. The weighting of those samples determines how far back was “now”.
Martin


[From Rick Marken (2014.08.22.1540)]

···

On Fri, Aug 22, 2014 at 5:40 AM, Andrew Nichols anicholslcsw@gmail.com wrote:

AN: Bravo, Rick! Once again, the consummate scientist :-)

RM: Thanks Andrew. That feels particularly nice now that I have met you in person. I don’t think I really deserve such an accolade but I’ll take it. Actually, I’m trying to imitate Bill Powers, who was not only a great theorist but also a brilliant scientist and philosopher of science (not to mention a pretty darn good engineer). And by scientist I mean that Bill was very committed to testing his theoretical beliefs via experiment, as evidenced by his marvelous collection of computer demonstrations of control phenomena. Bill was not an armchair theorist but a real-deal scientist who was dedicated to subjecting his theories to critical empirical test.

RM: I think Bill’s commitment to science is illustrated by something I remember Bill saying at one of the first CSG meetings, back in about 1985 or 1986. The meetings then were held at a conference center in the country outside of Racine, Wisconsin. I remember walking around the grounds talking with Bill and some others during a break about what they thought was the biggest problem for human society and Bill answered, with hardly a pause, “belief”. What he meant, of course, were vaguely formulated and untested beliefs, such as those that are the basis of religion and ideologies in general. By untested Bill meant not subjected to empirical (observational) test. Bill’s approach to understanding was deeply skeptical, which is what the scientific attitude is: skepticism. Bill didn’t believe anything until it held up to empirical test; and even beliefs that had passed empirical tests he considered still tentative, including his belief in PCT.

RM: That’s why I said in my talk at Northwestern that the future of PCT is with people who do the science, testing and developing the PCT model of purposeful behavior. PCT is not a finished product – a fine piece of crystal that can now be used to light the world – but a work in progress. Just as physics didn’t stop when Newton published the Principia – indeed, it really just got going full blast – the science of purposeful behavior shouldn’t stop with the publication of B:CP. Both works are very detailed, mathematical descriptions of beliefs about how certain phenomena are imagined to work. But unlike the Bible or other “authoritative” books that describe beliefs about how phenomena work, the beliefs described in scientific works, like the Principia and B:CP, stick around only if they are continuously subjected to empirical test and development based on those tests.

RM: Thanks again Andrew for the kind words. I wish I really were the consummate scientist. But having you think of me as one will just have to do for this lifetime;-)

Best regards

Rick

Andrew


Richard S. Marken, Ph.D.
Author of Doing Research on Purpose.
Now available from Amazon or Barnes & Noble


[From Rick Marken (2014.08.22.1940)]

···

Martin Taylor (2014.08.22.14.41)–

MT: That's true, but as you often point out, controlling a perceptual

(internal) variable also implies reducing the variability of the
corresponding environmental variable.

RM: I would rather say that controlling a perceptual variable implies controlling the environmental correlate of the perceptual variable. The reason for this slight change in terminology is to emphasize the fact that there may be no environmental variable that corresponds to the controlled perceptual variable. For example, in B:CP (p. 112 of the 2nd edition) Bill gives the example of controlling for the perception of the taste of lemonade. That perception is a construction (by a perceptual function) “…derived from the intensity signals generated by sugar and acid (together with some oil smells)… the mere intermingling of these physical components has no special physical effects on anything else, except the person tasting the mixture.” The same is true of many other perceptions that we control, such as the perception of variability (if we can perceive it). The variability of independent events that happen with different probabilities is an aspect of these events that we can perceive, but it has no more physical significance than some other aspect of these events that we can perceive, such as Morse code patterns.

RM: I believe the section of B:CP that I quoted from, on p. 112, is relevant to the discussion of “maps and territories”. The “philosophical fact” that emerges from the PCT model of perception is that a perception is not a “map” of the “territory” of physical reality. Rather, perceptions are constructions based on physical reality. Our best guess about what physical reality is is the model of that reality given to us by chemistry and physics. That model says that reality is made up of electromagnetic, gravitational and mechanical forms of energy which stimulate our sense organs and generate neural signals. But that reality doesn’t contain things like the taste of lemonade or the adorableness of a child. Those are perceptions created for us by our perceptual functions.

RM: I believe that we perceive reality as we do, not because it is the best “map” of reality but, rather, because it has adaptive significance; the way we perceive the world allows us to control what we need to control in order to successfully reproduce and survive. From a PCT perspective, perception is not a map of the territory of reality; it IS the territory, the reality. That’s why one of the main goals of PCT research is to determine what perceptual variables organisms control. The goal is to find out the nature of the territory – the reality – in which organisms control. Are there a fixed number of classes (types) of perceptions that are controlled, and are these classes organized into a hierarchy (as suggested by the hypothesis of a hierarchy of control systems controlling different types of perceptions)? As I said in my (unfortunately way too rambling) talk at Northwestern, this should be the focus of research on the PCT model of purposive behavior.

MT: The variability of something is a degree of freedom in itself. If that’s the perception you control, then that’s the environmental degree of freedom that is made more stable than it would otherwise be.

RM: Yes, if the perception of variability is controlled then that aspect of the environment – the aspect that is constructed by a neural network implementing a formula such as s^2 = Sum((X - M)^2)/N, where X represents the sensed environmental components from which the variance is constructed – is stabilized relative to a reference. But, of course, depending on what the reference for the perception of variance is, the controller will be seen as acting to maintain either a low (with a low reference) or a high (with a high reference) level of variance of the events, X, in the environment.

MT: Actually, if the perception is a true function of the environmental state, stabilizing one implies stabilizing the other. If what you are stabilizing is a rate of variation of a perception, then you are stabilizing the equivalent rate of variation in the CEV defined by that perceptual function. When you do the TCV, it works only because you (experimenter) can see something you guess might be the corresponding CEV. If it stabilizes when the subject seems to be controlling a perception, then at least your guess is related to the perception. The more it stabilizes, the closer the relation. If that were not so, then the TCV would be impossible.

RM: Yes, of course. But in this case the CEV is the variance of events. (I would rather call it a CAE – controlled aspect of the environment; what you call a CEV is, like the taste of lemonade or the variance of events, not necessarily an environmental variable.) So when the variance perception is stabilized at a low value the environmental correlate of that variable – the CAE, which is the variance of the events X – varies little. When the variance perception is stabilized at a high value the CAE varies a lot. In both cases the CAE is stabilized relative to the reference, but in one case the CAE – the variance of events – is low while in the other it is high.

MT: Also good to do, but I would guess that it would be rather hard to do, given the trade-off between precision in value and precision in time (it’s another of those complementary Heisenberg pairs). I’m guessing that unless you somehow constrained them, subjects would be inconsistent in how they balanced the precision of “the variability up to now” against the precision of “now”. You can’t get an accurate measure of variability from one sample, or even from three or four, so the question is how many samples the subject would be including in the estimate of variability. The weighting of those samples determines how far back was “now”.

RM: Exactly right. These are all possible differences in the way participants might construct the perception of variance: they might use a weighted average of events; they might use different periods over which the averaging is done. My expectation is that, while there may be fairly large differences across participants, each individual participant will compute the variance perception using the same perceptual function. Some might not even be able to perceive it or there might be some interesting learning going on. Perhaps this little project could give us a window into the development (learning) of a perceptual function. I think that this type of experiment could be the basis of a really interesting PCT research program.
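One conventional way to give a model participant such a perceptual function is an exponentially weighted running variance, where a single weight plays exactly the role discussed above: it sets how many past samples count as “now”. The recurrence below is a standard one; the particular alpha values are illustrative assumptions, not anything from the thread:

```python
def ewma_variance(xs, alpha):
    """Exponentially weighted running variance of a sequence.
    Small alpha -> long memory (a precise variability estimate, sluggish 'now');
    large alpha -> short memory (a responsive 'now', noisy variability)."""
    mean, var = xs[0], 0.0
    for x in xs[1:]:
        diff = x - mean
        mean += alpha * diff                      # update the running mean
        var = (1 - alpha) * (var + alpha * diff * diff)
    return var

# A constant sequence has zero perceived variability at any memory length,
# while an alternating one does not.
print(ewma_variance([3.0] * 50, 0.1))  # 0.0
```

Fitting alpha per participant would be one way to estimate “how far back was now” in the proposed experiment.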


Best regards

Rick


Richard S. Marken, Ph.D.

Author of Doing Research on Purpose.
Now available from Amazon or Barnes & Noble