CHOICE THEORY

[From Bruce Gregory (2005.0128.1721)]

Rick Marken (2005.01.28.1400)

Bruce Gregory (2005.0128.1553)

What remains to be spelled out is a way to tell the difference between
genetically inherited and learned needs. A new test?

Boy, those ideas just keep coming faster and faster.

No, just same idea futilely trying to get recognition. When you get the
time, I'd appreciate a response to:

[From Bruce Gregory (2005.0125.1513)]

Rick Marken (2005.01.25.1150)

Bruce Gregory (2005.0125.1420)--

Rick Marken (2005.01.25.1100)

What observations could you make that would lead to
rejection of the model?

Exactly the same observations that Bill uses to test the extent
awareness plays a role in reorganization.

You must mean that the same _type_ of observations that Bill uses
could be
used to test your model. I would like to know what specific
observations (of
that type) would lead to rejection of your model.

When I drive out the driveway, I know the major turns I will make to
take me on to Route 9 even before I reach the intersection, therefore I
must be predicting/anticipating/imagining what I will encounter. If I
encountered something else I would be surprised. If I encountered a
giant crater where Route 9 was and I was not surprised, I would reject
the hypothesis that I was anticipating what I would encounter.

When I open the refrigerator door, I know what I expect to see. I must
therefore be making a prediction before I open the door. If one day I
realized, just before I opened the refrigerator door, that I had
absolutely no idea what I expected to find, I would reject the
hypothesis that I anticipate what I will find before I open the door.

Shall I go on? Or is that enough to start with?

The enemy of truth is not error. The enemy of truth is certainty.

[From Bill Powers (2005.01.28.1705 MST)]

Bruce Gregory (2005.0128.1721)--

When I drive out the driveway, I know the major turns I will make to
take me on to Route 9 even before I reach the intersection, therefore I
must be predicting/anticipating/imagining what I will encounter.

I would say you are imagining (playing back a recorded sequence in this
case). I would not say you are predicting, and I don't know what you mean
by anticipating or what I should mean by it so I pass on that word.

When I open the refrigerator door, I know what I expect to see. I must
therefore be making a prediction before I open the door.

I would not call that predicting, but imagining or remembering. A
prediction would be perception of a predicted state of the refrigerator:
"there will be milk in the refrigerator when I open the door."

Surprise has nothing in particular to do with prediction. Surprise is a
subjective reaction to an unusually large error. Prediction is a kind of
calculation of the future based on past data and some sort of algorithm. If
surprise is involved, it would be based on an expectation that the
prediction will be correct. You can make a prediction without expecting it
to be correct.

If one day I realized, just before I opened the refrigerator door, that I had
absolutely no idea what I expected to find, I would reject the
hypothesis that I anticipate what I will find before I open the door.

It is perfectly possible that you could say to yourself, "There is going to
be milk on the top shelf at the left when I open the door." However, you
could be in process of opening the door to get the milk when you say that
to yourself; the prediction has little to do with the control process. It
is also possible to open the refrigerator door and get the milk out without
making any prediction at all: you just run off the familiar sequence. Of
course if there is no milk there, you might experience a sudden error in
the sequence and thus be surprised: what the heck happened to that full
half-gallon that you just put in there 10 minutes ago?

If it's a refrigerator at a friend's house, you could open the door to get
the milk if it's there, without having any reason to expect it to be there.
Not expecting it to be there is not the same as expecting it not to be there.

I am trying to say that while prediction does happen, it is an essential
part of control only under special circumstances -- where you are in fact
controlling a perception which is a prediction based on current data, and
could not carry out the control without it.

Of course you can always use the word prediction in such an abstract way
that any perception involving time-dependent variables "could be said" to
involve prediction. But that sort of analysis just trivializes the discussion.

Shall I go on? Or is that enough to start with?

I think enough has been said on this subject to satisfy me for now. Sorry
for butting in ahead of Rick.

Best,

Bill P.

[From Rick Marken (2005.01.28.1620)]

Bruce Gregory (2005.0128.1721)

Rick Marken (2005.01.28.1400)

Bruce Gregory (2005.0128.1553)

What remains to be spelled out is a way to tell the difference between
genetically inherited and learned needs. A new test?

Boy, those ideas just keep coming faster and faster.

No, just same idea futilely trying to get recognition.

I think you would feel more like your ideas are being recognized if you
would develop them in more detail. How, for example, would you test to
determine whether a variable was intrinsic or learned? What exactly would
you do? Could you describe a prototype experiment that could be implemented in
terms of a concrete demonstration, like the rubber band or computer demos?

When you get the time, I'd appreciate a response to:

From Bruce Gregory (2005.0125.1513)

When I drive out the driveway...

When I open the refrigerator door..

These aren't questions. What response canst thou have to these?


--
Richard S. Marken
MindReadings.com
Home: 310 474 0313
Cell: 310 729 1400

--------------------

This email message is for the sole use of the intended recipient(s) and
may contain privileged information. Any unauthorized review, use,
disclosure or distribution is prohibited. If you are not the intended
recipient, please contact the sender by reply email and destroy all copies
of the original message.

[From Rick Marken (2005.01.28.1640)]

Bill Powers (2005.01.28.1705 MST)--

Bruce Gregory (2005.0128.1721)--

When I drive out the driveway...

I would say you are imagining

When I open the refrigerator door..

I would not call that predicting

Surprise has nothing in particular to do with prediction. Surprise is a
subjective reaction to an unusually large error.

If one day I realized

I am trying to say that while prediction does happen, it is an essential
part of control only under special circumstances -- where you are in fact
controlling a perception which is a prediction based on current data, and
could not carry out the control without it.

Shall I go on? Or is that enough to start with?

I think enough has been said on this subject to satisfy me for now. Sorry
for butting in ahead of Rick.

No problem. I think you did about as well as I would have done (had I
understood what Bruce wanted). Well, OK, you actually did orders of
magnitude better. Nice.

Best

Rick


--
Richard S. Marken
MindReadings.com
Home: 310 474 0313
Cell: 310 729 1400


[From Bill Powers (2005.01.29.0645 MST)]

Martin Taylor [2005.01.28.12.45] --

It's hard to be certain whether any difference we have is simply a difference of wording. I rather think that we see the issue of "survival" pretty much the same way.

I think so, too. It's related (a sideways thought) to the issue of fear of death. We can't really fear death on the basis of experience; the fears are all in what we imagine, mostly the process of getting dead. So "survival" is just what we do all the time; there's nothing special about it unless we start obsessing on the subject. As you might say, the goal of survival is in a meme, not a gene.

However, I don't think that's all there is to issues of evolution.

No, I don't either. But the how of evolution and coevolution is a matter for theory. Evolution itself is a fact: the changes in form that occur over time. It's as much a fact as anything else we think we know, such as the shape of the earth's orbit. What remains conjectural is the theory of how evolution happens. I have the idea that it's an action by organisms, not something that just happens to organisms because of selection pressures (or that God does). Well, you've read my paper on that so I don't have to elaborate.

If we think of selection pressures as disturbances and mutations as the means of counteracting them (in the E. coli style of reorganization), then I think all the same considerations (such as altruism) can be dealt with as evolutionists now try to deal with them, but much more believably. Evolutionary models (genetic algorithms) are horribly inefficient even when grossly simplified, and all the examples of them that I've seen require the modeler to play "hot and cold," telling the organisms who is closer to the goal and therefore gets to reproduce, and who dies. So the modeler builds in a guidance system. Do you know of any purely random genetic algorithms that work well enough to be useful?

It would be easy to take the E. coli model and present it in two forms: one in which the bacterium simply changed directions at random in a way unconnected to its position relative to the goal-circle, and the other one that changes directions randomly but at intervals that depend on the distance from the goal. Rick Marken, are you listening? My prediction is that the former would be vastly slower than the latter -- one or two orders of magnitude, maybe more (depending on the size of the goal circle). Martin, if you had such a model in hand, do you think it could be the basis for a paper by you in the evolutionary literature?

If we could establish the E. coli model as a viable alternative to classical natural selection, a new line of models could then be applied to problems like altruism and sexual reproduction. What do you think?
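The two forms Bill proposes can be put side by side in a short simulation. This is a minimal sketch, not any existing CSG demo: the field size, speeds, tumble probability, and step limit below are illustrative assumptions.

```python
import math
import random

def run(model, start=(50.0, 50.0), goal=(0.0, 0.0), goal_radius=2.0,
        speed=1.0, tumble_p=0.05, max_steps=200_000, rng=None):
    """Move a dot in straight runs broken by random 'tumbles' (jumps to
    a new random heading).  model="ecoli": tumble whenever the distance
    to the goal is increasing, so tumble timing depends on position, as
    in the E. coli reorganization model.  model="blind": tumble with a
    fixed probability per step, unconnected to position.  Returns the
    number of steps taken to enter the goal circle, or None."""
    rng = rng or random.Random()
    x, y = start
    heading = rng.uniform(0.0, 2.0 * math.pi)
    last_d = math.hypot(x - goal[0], y - goal[1])
    for step in range(max_steps):
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
        d = math.hypot(x - goal[0], y - goal[1])
        if d < goal_radius:
            return step + 1
        # Tumble rule is the only difference between the two forms.
        tumble = (d > last_d) if model == "ecoli" else (rng.random() < tumble_p)
        if tumble:
            heading = rng.uniform(0.0, 2.0 * math.pi)
        last_d = d
    return None  # never reached the goal

# The E. coli form reaches the goal quickly; the position-blind form
# usually exhausts its step budget without arriving.
ecoli_steps = run("ecoli", rng=random.Random(1))
blind_steps = run("blind", rng=random.Random(1))
```

Running many trials of each and comparing mean arrival times would give a concrete estimate of the "one or two orders of magnitude" difference Bill predicts.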

Best,

Bill P.

[From Bill Powers (2005.01.29.0798 MST)]

Bruce Gregory (2005.0128.1553)

The distinction is clear. What remains to be spelled out is a way to
tell the difference between genetically inherited and learned needs. A
new test?

I think the test is pretty simple. No organism can live more than a short time relative to its species' life-span if its needs are not met. Most organisms can live if their learned wants are not met -- they can find other things to want that have the same effects on intrinsic state, or they can simply do without (bicycles). This is according to my definitions of needs and wants, of course. I'm not sure that any other definitions would offer such a clear-cut test.

Best,

Bill P.


The enemy of truth is not error. The enemy of truth is certainty.

[From Rick Marken (2005.01.29.1640)]

Bill Powers (2005.01.29.0645 MST)--

It would be easy to take the E. coli model and present it in two forms: one in which the bacterium simply changed directions at random in a way unconnected to its position relative to the goal-circle, and the other one that changes directions randomly but at intervals that depend on the distance from the goal. Rick Marken, are you listening? My prediction is that the former would be vastly slower than the latter -- one or two orders of magnitude, maybe more (depending on the size of the goal circle). Martin, if you had such a model in hand, do you think it could be the basis for a paper by you in the evolutionary literature?

I've kind of done that already in my Selection of Consequences demo at:

http://www.mindreadings.com/ControlDemo/Select.html

The "Control" button runs the basic E. coli model. The Reinforcement button runs your suggestion: direction of movement is changed randomly in a way that is unconnected to the position of the bacterium (dot) relative to the goal circle.

A better model of Darwin's Hammer type evolution would probably involve a modification of the Reinforcement model so that there are random changes in direction of movement of the dot (the directions being the phenotypic variations resulting from genetic mutations) at random times (after life spans of random duration) except when the direction is straight at the target (the "successful" mutation).

Best regards

Rick


---
Richard S. Marken
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[From Bill Powers (2005.01.29.1830 MST)]

Rick Marken (2005.01.29.1640)

A better model of Darwin's Hammer type evolution would probably involve a modification of the Reinforcement model so that there are random changes in direction of movement of the dot (the directions being the phenotypic variations resulting from genetic mutations) at random times (after life spans of random duration) except when the direction is straight at the target (the "successful" mutation).

Uh-uh. The successful mutation is the one that gets TO the circle. How would natural selection know that moving in a certain direction is going to get the dot to the circle? Even if it's headed right at the circle where the food is, it could die of starvation before it gets there.

This is what I meant about genetic algorithms in which the programmer puts guidance in. Heading toward food is not the same as getting to the food.

Better check out your demo -- it doesn't work the way the instructions say it does. Apparently, both "control" and "demo" turn on models that move the bug. Pressing the space bar makes the bug jump to a new position rather than changing its direction of movement.

However, it's pretty clear that the "reinforcement" model, which simply changes directions at random and at random times, will almost never get to the circle, while the "control" model always gets to it quite rapidly. "Self-directed evolution" would clearly be many orders of magnitude more efficient than random mutation and natural selection in the Darwinian manner.

Best,

Bill P.

[From Rick Marken (2005.01.30.0950)]

Bill Powers (2005.01.29.1830 MST) --

Rick Marken (2005.01.29.1640)--

A better model of Darwin's Hammer type evolution would probably involve a modification of the Reinforcement model so that there are random changes in direction of movement of the dot (the directions being the phenotypic variations resulting from genetic mutations) at random times (after life spans of random duration) except when the direction is straight at the target (the "successful" mutation).

Uh-uh. The successful mutation is the one that gets TO the circle. How would natural selection know that moving in a certain direction is going to get the dot to the circle? Even if it's headed right at the circle where the food is, it could die of starvation before it gets there.

I was thinking of direction toward the circle as the mutation that results in a phenotype that fits the current environmental "niche". Once that mutation occurs it is passed on to future generations. So the dot moving toward the circle is the species that works. All other directions (regardless of how close to the "niche" direction) die out after leaving some mutated offspring (new direction) after some random time of existence. I'm sure this algorithm would be _far_ less efficient than the E. coli algorithm at producing the "successful" mutation. But I can see that this doesn't really make sense, since it is really "hitting the target" that is the analogy of the "successful niche mutation" in this spatial analogy of evolution.

Better check out your demo -- it doesn't work the way the instructions say it does. Apparently, both "control" and "demo" turn on models that move the bug. Pressing the space bar makes the bug jump to a new position rather than changing its direction of movement.

Are you sure you are pressing the space bar and not the "click" bar under the mouse pad? The demo works fine when I use the space bar to "tumble" the dot in the "Subject" version of the demo. Pushing the "click" bar in any of the three versions resets the program, hence the change in position of the dot.

Best

Rick


---
Richard S. Marken
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[From Bill Powers (2005.01.30.1130 MST)]

Rick Marken (2005.01.30.0950)--

I was thinking of direction toward the circle as the mutation that results in a phenotype that fits the current environmental "niche". Once that mutation occurs it is passed on to future generations. So the dot moving toward the circle is the species that works. All other directions (regardless of how close to the "niche" direction) die out after leaving some mutated offspring (new direction) after some random time of existence. I'm sure this algorithm would be _far_ less efficient than the E. coli algorithm at producing the "successful" mutation. But I can see that this doesn't really make sense, since it is really "hitting the target" that is the analogy of the "successful niche mutation" in this spatial analogy of evolution.

Yes, I can see that you could define getting on the right directional line to be the successful mutation. But in all applications of the genetic algorithm that I've seen, it's getting to the circle that really matters. For example, Beer (I think) produced a model in which cockroaches mutated to get to a place where food was. The bugs that turned in the right direction were allowed to reproduce without actually getting to the food, which is not how real natural selection would work.
To be realistic, you'd have to calculate the probability that the bug would travel the whole distance to the circle without tumbling again.
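That probability has a simple closed form if we assume tumbles arrive as a Poisson process (an assumption of this sketch, not something stated in the thread); the rate and speed values are illustrative:

```python
import math

def p_full_run(distance, speed=1.0, tumble_rate=0.1):
    """Probability that a bug covers `distance` in one straight run,
    assuming tumbles arrive as a Poisson process at `tumble_rate`
    tumbles per unit time, so straight-run durations are exponentially
    distributed with mean 1/tumble_rate."""
    return math.exp(-tumble_rate * distance / speed)

# At one tumble per ten time units, a run covering 100 distance units
# without a tumble has probability exp(-10), roughly 4.5e-5.
p_full_run(100.0)
```

The exponential drop-off is the point: doubling the distance to the food squares the improbability of getting there in one lucky run, which is why a purely random scheme scales so badly.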
I just thought of this. Suppose you made a model with small bits of food scattered sparsely through a large space. You allow bugs to reproduce that travel in a certain direction, the direction toward some food from where they are. The successful bugs get to the food, and eat it, and reproduce then and there. What do the offspring do then?

If they have inherited a tendency to travel in a certain direction, they will head off in the same direction that the parent was moving, which will take them directly away from where the food was. If they inherit a tendency to move toward a particular place from wherever they are, they will head to the place where the food was before it was eaten. So it would not be useful to inherit either tendency. Exactly what would the offspring have to inherit in order to be successful? Certainly not the parent's, or parents', behavior.

RE: Demo

Are you sure you are pressing the space bar and not the "click" bar under the mouse pad?

All right, now I understand. If I click on "subject," nothing is supposed to happen because of doing that -- it merely makes the space bar effective in causing random tumbles, and starts the bug moving. If I click on "control" or "reinforcement," this does not mean I am controlling or reinforcing, but that a controlling or reinforcing model is producing the movements.

It would be nice to let the runs go on much longer, to show that the reinforcement model is not going to reach the goal unless you wait about a year. Also, if you let the movements wrap around in X and Y, or bounce off the boundaries as in Pong, you will increase the efficiency to the point where the reinforcement model might reach the goal once in a while.

In the reinforcement model, what action is supposedly reinforced? The discussion says,

In this study reinforcement is movement of the bug relative to the target: movement away from the target is strongest reinforcement because it results in the largest increase in the probability of the response (bar press) that produces it and movement toward the target is the weakest reinforcement because it results in no increase in the probability of the response that produced it.

I assume you're talking about reinforcement of the human participant in this experiment. Does it apply to the "reinforcement" model when it's running? I guess you noticed that the actual result of the tumbling behavior is the opposite of what is reinforced.

Best,

Bill P.


[From Rick Marken (2005.01.31.0840)]

Bill Powers (2005.01.30.1130 MST)

It would be nice to let the runs go on much longer, to show that the
reinforcement model is not going to reach the goal unless you wait about a
year. Also, if you let the movements wrap around in X and Y, or bounce off the
boundaries as in Pong, you will increase the efficiency to the point where the
reinforcement model might reach the goal once in a while.

Good suggestions. Will do.
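For what it's worth, the wrap-around and Pong-style bounce Bill suggests come down to a couple of lines each; the 500-unit field size below is an illustrative assumption, not the demo's actual dimensions:

```python
def wrap(x, y, width=500.0, height=500.0):
    """Toroidal wrap-around: a bug that leaves one edge of the
    width x height field re-enters at the opposite edge."""
    return x % width, y % height

def bounce(pos, vel, lo=0.0, hi=500.0):
    """Pong-style bounce on one axis: reflect the position back
    inside [lo, hi] and reverse the velocity at a boundary."""
    if pos < lo:
        return 2 * lo - pos, -vel
    if pos > hi:
        return 2 * hi - pos, -vel
    return pos, vel
```

Either rule keeps the dot in a finite region, so a blindly tumbling model revisits the neighborhood of the target instead of wandering off to infinity, which is why it raises the chance of an occasional hit.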

In the reinforcement model, what action is supposedly reinforced? The
discussion says,

In this study reinforcement is movement of the bug relative to the target:
movement away from the target is strongest reinforcement because it results in
the largest increase in the probability of the response (bar press) that
produces it and movement toward the target is the weakest reinforcement
because it results in no increase in the probability of the response that
produced it.

I assume you're talking about reinforcement of the human participant in this
experiment. Does it apply to the "reinforcement" model when it's running? I
guess you noticed that the actual result of the tumbling behavior is the
opposite of what is reinforced.

I think I know what you mean. The subject should lay off the space bar after
a "reinforcement" (tumble that results in movement toward the target) but
the probability of pressing is actually increased by the reinforced result,
so the reinforcement is working "against" getting to the target.

Best

Rick


--
Richard S. Marken
MindReadings.com
Home: 310 474 0313
Cell: 310 729 1400
