Creating PCT-based social-economic agents

[From Frank Lenk (2010.03.03.11.20 CST)]

Having achieved at least a reasonable milestone in my work
projects, I am taking a day or two off to think more seriously about my
dissertation. In particular, I want to try to create my first set of
PCT-based agents, and I am looking for advice. The minimum number of agents I
need is two.

That in itself is a profound statement. Most economic
models start from the standpoint of individual people maximizing utility or
individual firms maximizing profits. There are lots of “Robinson
Crusoe” models that, to focus on what is essential about the human
condition and economic problems in general, create as their atom of analysis an
isolated individual on a deserted island.

My vantage point is different. I believe humans are inherently
social. We don’t exist without others – in fact, we can’t (at
least until the advent of human cloning). Even Adam Smith thought that the defining
characteristic of humans was their capacity for what he called “Sympathy”
for others, though today we would translate this more as “empathy”.
(See Smith’s Theory of Moral Sentiments for more, a book he
thought was more important than his Wealth of Nations.)

In my view, the atom of a social-economic analysis is the
family. I think that most of what we call economic behavior is really about
either finding someone to start a family with or satisfying the needs of our families
once we have them. In PCT terms, the economy is (part of the) behavior that
results from attempts to control our perceptions to our references for family. (This
may also apply to the underground economy, if books like The Godfather
are to be believed).

I think this helps explain why there is no concept of “enough”
in economics. The concept of enough applies to families – when are they
safe enough? When do my children have enough opportunity? We can show
that in general, kids who ride around in bigger cars and live in bigger homes
are safer, and also get to go to better schools and have more options for
better-paying jobs. It can then be considered sensible to want more of these
things. If our reference signals are for perfect safety and infinite
opportunity, it may even be reasonable to want a lot more of them.

Yet, even with these as reference signals, it is not clear to me
why we seem to think that “more stuff” is the surest way to achieve
them. One can imagine a hypothetical alternative society that has the
same references for ends, but not means – choosing instead to focus on greater
community cohesion, breaking up the concentrations of poverty that create the
high crime in the first place, conserving scarce resources rather than
rebuilding on new land what has been abandoned in urban cores, and investing
more adult time rather than adult money in their youth – and finding that
this yielded equally good outcomes.

So, I am interested in where our reference signals come from. Some
are undoubtedly biologically based (which? I will need to come back to
this later). But some reference signals must be at least passed from
parent to child (hence the reason why I say my minimum number of PCT-based
agents is two). Some are also likely passed from the “society”
to the parents, but for now, I am leaving a discussion of how to model that for
another day (such influence might be modeled as arising from the network of
people with whom we interact, for example).

George Herbert Mead has this idea of a “social self”
that intrigues me. I may not be getting this exactly right, but in
essence I believe this means that we learn to be ourselves not from controlling
our actions directly, but from controlling others’ reactions to our
actions. We change our actions until we get the desired reaction from others.
Initially, a child doesn’t know whether hitting another child is right or
wrong. But if the hitting elicits a scolding from mom, the child changes
actions until he/she finds a method of play that does not involve
hitting. Eventually, this alternative style of play becomes the child’s
own internal reference signal.

It is as if we have a relatively higher gain for others’
reactions than for the direct consequences of our actions, at least for some of
them (I suspect that “peer pressure” works at least somewhat
similarly). This was brought home to me in a simple way not long ago.
I snore. So as I fall asleep, I snore a little as I relax. If I am not
quite completely asleep, some part of me is aware that I am snoring, but left
to my own devices, I don’t react to it. My wife, however, hates my
snoring. So as I begin to snore, she moans in response. THAT I react to and
turn over – not my actions, but her reaction to my
actions. And gradually, I find myself more aware of my snoring and
(sometimes) I turn over before she moans.

We seem exquisitely tuned to be sensitive to the reactions of
others. Even in the midst of a conversation, I can sense myself changing what I
say and how I say it in response to subtle clues about whether someone agrees
or disagrees with me.

So my question is how do I begin to model this kind of thing? I
am hoping the “old hands” at PCT modeling could give me some
suggestions. I plan to do my programming initially in the agent-based modeling framework
NetLogo. (For those who are interested, it is available as a free
download here: http://ccl.northwestern.edu/netlogo/.)
But this shouldn’t constrain anyone from feeling free to
offer me whatever advice they see fit.

Frank

[From Bill Powers (2010.03.04.0808 MST)]

Frank Lenk (2010.03.03.11.20 CST) –

FL: In my view, the atom of a
social-economic analysis is the family.

I think this helps explain why
there is no concept of “enough” in economics. The concept of enough
applies to families – when are they safe enough? When do my children have
enough opportunity? We can show that in general, kids who ride
around in bigger cars and live in bigger homes are safer, and also get to
go to better schools and have more options for better-paying
jobs.

BP: Could it be that this is a recipe for positive (error-increasing)
feedback? Why is there such a concern for safety? Could it be that riding
around in big cars and living in big houses is an invitation to envy and
resentment by others (whether real or only imagined by the well-off)
which lead to feeling even less safe?

FL: It can then be considered
sensible to want more of these things. If our reference signals are
for perfect safety and infinite opportunity, it may even be reasonable to
want a lot more of them.

BP: Yes. It’s the nature of positive feedback that the goal recedes as
one approaches it, so what begins as a slight inclination ends as an
all-out act of desperation. The richer you get, the more you have to lose
and the more you fear losing it. Imagine being given a delicate eggshell
beautifully painted and gilded by Picasso or Matisse, one of a kind and
worth a million dollars. Would you dare walk around with it in your
pocket or in your hand? Would you take it to a bar with you and show it
to other people? Would you advertise the place where you keep it at
night? Even hiring a guard would be self-defeating because the presence
of the guard would be a message that there is something highly valuable
here. You’d have to be terribly concerned about cracking the eggshell, or
dropping it, or dropping something else onto it, or having a pet step on
it or bump it off the table, or a child play roughly with it. In the end
you might have to build a secret room for it known only to you, where you
can go to look at the egg once in a while, all by yourself (when people
won’t wonder where you are and go looking for you). And you might
hesitate to visit the egg yourself, for fear of getting nervous and
having an accident with it. Everything you do to protect the eggshell
only makes you worry about it more.

FL: Yet, even with these as
reference signals, it is not clear to me why we seem to think that “more
stuff” is the surest way to achieve them. One can imagine a
hypothetical alternative society that has the same references for ends,
but not means – choosing instead to focus on greater community cohesion,
breaking up the concentrations of poverty that create the high crime in
the first place, conserving scarce resources rather than rebuilding on
new land what has been abandoned in urban cores, and investing more adult
time rather than adult money in their youth – and finding that this
yielded equally good outcomes.

BP: You may have put your finger on an important non-ideological reason
to avoid radical inequalities in a society: they don’t achieve what they
are supposed to achieve for or against either side. For any kind of goal,
the action you take to reach it could be moving you, or it, in the wrong
direction. If you don’t notice it’s doing that, you’ll think something is
disturbing the controlled variable and try to act even more strongly
against the disturbance – only your own action is the disturbance and is
making the error worse.
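
To make the runaway concrete, here is a toy numerical sketch (the names
setup-runaway and step-runaway are invented for the illustration, not taken
from any existing model) of a loop in which the agent's own output moves the
perception the wrong way, so that every correction enlarges the error it is
trying to remove:

globals [ reference perception output ]

to setup-runaway
  set reference 10
  reset-ticks
end

to step-runaway                                          ;; one pass around the loop per tick
  set output (output + 0.1 * (reference - perception))   ;; the agent acts harder as the error grows
  ;; the sign flip below is the whole point: the environment turns the agent's
  ;; output into a perception that moves away from the reference instead of
  ;; toward it, so acting harder only makes the error bigger
  set perception (0 - output)
  tick
end

Change the last assignment to "set perception output" and the same loop becomes
the ordinary error-reducing arrangement: the perception settles at the
reference within a few dozen ticks instead of running away.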

FL: So, I am interested in where
our reference signals come from. Some are undoubtedly biologically based
(which? I will need to come back to this later). But some
reference signals must be at least passed from parent to child (hence the
reason why I say my minimum number of PCT-based agents is two).
Some are also likely passed from the “society” to the parents, but for
now, I am leaving a discussion of how to model that for another day (such
influence might be modeled as arising from the network of people with
whom we interact, for example).

BP: This question keeps coming up, but it involves a false assumption.
The false assumption is that reference signals are set once and for all.
They’re not: they have to be adjustable, continually adjustable, to allow
higher systems to vary them as a means of achieving higher
goals.
What you mean here is perceptions, not reference signals. Where do our
ways of perceiving things come from? Some are biologically based, some
are acquired, some are modifications of biologically-based ways of
perceiving. The reference setting is a different matter: that determines
how much of a given perception is desired. Once we get our
perceptions organized they become semi-permanent as kinds of perceptions.
We may learn that being wealthy is a sign of some kind of superiority,
but that doesn’t mean we will forever want to be superior. We may find
that being superior is no fun – too much is expected of you – and
try to avoid being superior. So we set the reference level for
superiority at a low level.

On the other hand, we may be lucky enough to reorganize and cease to
perceive superiority as a human attribute, or as existing at all.
Superiority as an automatic way of perceiving then disappears, taking
with it any reference levels that may have been set for it, as well as
other causal relationships with it.

FL: George Herbert Mead has this
idea of a “social self” that intrigues me. I may not be getting
this exactly right, but in essence I believe this means that we learn to
be ourselves not from controlling our actions directly, but from
controlling others’ reactions to our actions. We change our actions until
we get the desired reaction from others.

BP: A lot of the confusion over things like a “social self” is
cleared up if you stop wondering if such things really exist and remember
that they are perceptions. Of course I can perceive a self that is
adjusted to have a certain appearance to others, as best I can judge.
When that’s not a problem, there are other selves which are more simply
descriptive, and sometimes there’s no self at all – just a concern with
what’s going on.
And the observer of all these selves is something entirely different from
a self. It’s an Observer. Sometimes, in fact, there is no self until
someone asks about it. They say, “What do you think about
Oprah?” The first reaction may be “Who? Me? Well, I guess I
think she’s a pretty good person, though I don’t know that much about her
…” You have to look and see what thoughts are there, though when
asked there was nobody, just then, thinking them.

A self as a permanent identity is different from a self as a perceptual
construct.

FL: Initially, a child doesn’t
know whether hitting another child is right or wrong. But if the
hitting elicits a scolding from mom, the child changes actions until
he/she finds a method of play that does not involve hitting.
Eventually, this alternative style of play becomes the child’s reference
own internal reference signals.

BP: I’d say that right and wrong become perceptions and that the child
learns to control them. Some children choose a low setting for a
perception of rightness, or a high setting for perception of wrongness
(deliciously bad). However, if the child chooses a low reference level
for the things perceived as a state of rightness, or tries out how it
feels to seek wrongness, the consequences may lead to enough intrinsic
error to cause reorganization.

Other children (and parents) don’t see the world in terms of right and
wrong, so the question of how much rightness or wrongness is wanted
doesn’t apply.

FL: It is as if we have a
relatively higher gain for others’ reactions than for the direct
consequences of our actions, at least for some of them (I suspect that
“peer pressure” works at least somewhat similarly). This was
brought home to me in a simple way not long ago. I snore. So
as I fall asleep, I snore a little as I relax. If I am not quite
completely asleep, some part of me is aware that I am snoring, but left
to my own devices, I don’t react to it. My wife, however, hates my
snoring. So as I begin to snore, she moans in response. THAT I
react to and turn over – not my actions, but her reaction
to my actions. And gradually, I find myself more aware of my
snoring and (sometimes) I turn over before she
moans.

BP: This is altruism, which puzzles some people. You have a high
reference level for your perception of your wife’s peace and comfort,
which is part of what loving her means. You have progressed beyond trying
not to snore just to keep her from complaining about it, and gone on to
the stage of wanting her not to be disturbed. A cynic might say that you
just don’t like moaning, and I suppose that’s possible, but it seems to
me that a person who could believe that explanation has never loved
anyone.

FL: We seem exquisitely tuned to
be sensitive to the reactions of others. Even in the midst of a
conversation, I can sense myself changing what I say and how I say it in
response to subtle clues about whether someone agrees or disagrees with
me.

BP: Being sensitive to – perceiving – the reactions of others and
learning to interpret them reasonably well does not say what reactions
you want them to be having. If you see that your actions are annoying
someone else, that may be exactly what you want. Kids do it to teachers
quite often. We use the term “being sensitive to” to imply that
we have a kindly, protective attitude toward the reaction, but that isn’t
necessarily or even often the case. The reference level for the person’s
reaction depends on a lot of other circumstances and your higher-order
perceptions.

FL: So my question is how do I
begin to model this kind of thing? I am hoping the “old hands” at PCT
modeling could give me some suggestions. I plan to do my programming
initially in the agent-based modeling framework NetLogo. (For those
who are interested, it is available as a free download here:
http://ccl.northwestern.edu/netlogo/.) But this shouldn’t
constrain anyone from feeling free to offer me whatever advice they see
fit.

BP: I love being asked to give advice because I’m full of it and as a
proponent of the Method of Levels I’m not supposed to force it on people.
Well, you asked and this isn’t therapy.
My advice is to model controlled variables first, not reference levels.
Decide what is being controlled first, then let higher levels of
control systems determine how much of it is wanted by varying the
reference signal. Once you have decided what is to be controlled, figure
out how it is to be controlled: what actions can affect it, and what
lower-level perceptions are used as a basis for perceiving it. And
finally, try to figure out why it is being controlled: what higher-level
perceptions depend on this perception and similar perceptions at the same
level.

You can also, less ambitiously, assume a reference level and simply try
to model one system at one level, as I do with tracking tasks. But I
suspect that in your application of PCT, hierarchical relationships will
be of interest.
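
As a rough illustration of that ordering (a minimal sketch only; the variables
crowding-reference and distance-reference and the procedure run-agent are made
up for the example and are not claimed to be part of any published PCT model),
a lower loop can control a perceived distance while a higher loop controls
perceived crowding by varying the lower loop's reference:

turtles-own [ crowding-reference distance-reference ]

to run-agent                                   ;; call as: ask turtles [ run-agent ]
  ;; lower-level controlled variable: my own distance from the center of the world
  let perceived-distance distancexy 0 0
  ;; higher-level controlled variable: how crowded it is around me
  let perceived-crowding count other turtles in-radius 2
  ;; the higher loop does not act directly; it acts by adjusting the reference
  ;; handed down to the lower loop (crowding-reference would be set in setup)
  set distance-reference (distance-reference + 0.1 * (perceived-crowding - crowding-reference))
  ;; the lower loop compares its perception with that reference and acts
  facexy 0 0
  back (0.2 * (distance-reference - perceived-distance))
end

The order of construction follows the advice: first the controlled variables
(perceived-distance, perceived-crowding), then the means of control (facexy and
back for the lower loop, adjustment of distance-reference for the higher one),
and only then the question of what the reference settings should be.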

Best,

Bill P.

[From Bill Powers (2010.03.04.1035 MST)]
Frank Lenk (2010.03.03.11.20 CST)
Frank, I’ve downloaded NetLogo and can’t see (yet) how it can be used to
construct a control-system model.
In the programming guide, an Agent is defined this way:
"The NetLogo world is made up of agents. Agents are beings that can
follow instructions. Each agent can carry out its own activity, all
simultaneously. "
Under Procedures we have:

============================================================================
In NetLogo, commands and reporters tell agents what to do. A
command is an action for an agent to carry out. A reporter
computes a result and reports it.
Most commands begin with verbs (“create”, “die”,
“jump”, “inspect”, “clear”), while most
reporters are nouns or noun phrases.
Commands and reporters built into NetLogo are called primitives.
The NetLogo Dictionary has a complete list of built-in commands and
reporters.

Here are the “turtle-related” primitives from the
dictionary:
back (bk), <breeds>-at, <breeds>-here, <breeds>-on, can-move?,
clear-turtles (ct), create-<breeds>, create-ordered-<breeds>,
create-ordered-turtles (cro), create-turtles (crt), die, distance,
distancexy, downhill, downhill4, dx, dy, face, facexy, forward (fd),
hatch, hatch-<breeds>, hide-turtle (ht), home, inspect, is-<breed>?,
is-turtle?, jump, layout-circle, left (lt), move-to, myself, nobody,
no-turtles, of, other, patch-ahead, patch-at,
patch-at-heading-and-distance, patch-here, patch-left-and-ahead,
patch-right-and-ahead, pen-down (pd), pen-erase (pe), pen-up (pu),
random-xcor, random-ycor, right (rt), self, set-default-shape,
__set-line-thickness, setxy, shapes, show-turtle (st), sprout,
sprout-<breeds>, stamp, stamp-erase, subject, subtract-headings, tie,
towards, towardsxy, turtle, turtle-set, turtles, turtles-at,
turtles-here, turtles-on, turtles-own, untie, uphill, uphill4

Note that there is no primitive called “turtle-perceives” or
“turtle-wants”.

Here is the primitive called “distancexy”:

====================================================================
distancexy xcor ycor
Turtle Command
Patch Command

Reports the distance from this agent to the point (xcor, ycor).

The distance from a patch is measured from the center of the patch.
Turtles and patches use the wrapped distance (around the edges of the
world) if wrapping is allowed by the topology and the wrapped distance is
shorter.

if (distancexy 0 0) > 10
  [ set color green ]
;; all turtles more than 10 units from
;; the center of the world turn green.

===============================================================
This function reports the distance not to the turtle but to the program
that runs everything. It would be very awkward to make this distance
variable into a perception belonging to just one turtle, then compare
this distance to a reference distance and output a command to move in
some specific direction that depends on the error. I suppose it could be
done, but getting a PCT model to run would be a continual process of
trying to fool NetLogo into doing something it wasn’t designed to
do.
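
For what it is worth, something like a per-turtle loop can at least be
approximated with the existing primitives; the names reference-distance and
control-distance below are invented for the sketch, and nothing here is a claim
about what NetLogo was designed to do:

turtles-own [ reference-distance ]       ;; each turtle carries its own reference signal

to control-distance                      ;; call as: ask turtles [ control-distance ]
  ;; perceptual signal: this turtle's own distance from the origin
  let perceived-distance distancexy 0 0
  ;; comparator: error is reference minus perception
  let distance-error (reference-distance - perceived-distance)
  ;; output function: move along the line through the origin, in proportion to the error
  facexy 0 0
  back (0.2 * distance-error)            ;; positive error (too close) moves the turtle outward
end

Whether this escapes the objection is another matter; the distance is still
computed by the system that runs everything, but the comparison with a
reference and the error-driven output at least belong to the individual turtle.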

Here is a program example for making the turtle move to a
“patch” that has the lowest value for some
“patch-variable.”

============================================================

move-to patch-here  ;; go to patch center
let p min-one-of neighbors [patch-variable]  ;; or neighbors4
if [patch-variable] of p < patch-variable [
  face p
  move-to p
]
===================================================================

If the test comes out true, two commands are issued: face a
particular patch and move to that patch. You might get somewhere by
interpreting patches to mean states of mind or attitudes or something
like that, but the metaphors in NetLogo are all biased toward a
particular view of what behavior is, and it’s not the PCT view as far as
I have seen.

FL: … The minimum number
agents I need is two.

There are lots of “Robinson Crusoe” models that, to focus on what is
essential about the human condition and economic problems in general,
create as their atom of analysis an isolated individual on a deserted
island.

My vantage point is different. I believe humans are inherently
social.

BP: Then that belief is the first thing you have to set aside. You can’t
show with a model or in any other way that your belief is true if you
simply assume it’s true before you start constructing your model. All you
can do that way is to explore the implications that would follow IF the
belief were true. You still don’t know if it’s true, so you don’t know if
the deduced implications have any truth either. You’ll be wasting your
time.

If you look at the basic diagram for a PCT control system, you will find
that there is no function or signal called “control” in
it.

FL: We don’t exist without
others – in fact, we can’t (at least until the advent of human cloning).
Even Adam Smith thought that the defining characteristic of humans was
their capacity for what he called “Sympathy” for others, though today we
would translate this more as “empathy”. (See Smith’s Theory of
Moral Sentiments
for more, a book he thought was more important than
his Wealth of Nations.)

In my view, the atom of a social-economic analysis is the family. I
think that most of what we call economic behavior is really about either
finding someone to start a family with or satisfying the needs of our
families once we have them. In PCT terms, the economy is (part of the)
behavior that results from attempts to control our perceptions to our
references for family. (This may also apply to the underground economy,
if books like The Godfather are to be believed).

BP: Well, now you’ve told us what you intend to prove if you can, and I
for one don’t doubt that you’ll find a way to do it. It’s easy to prove
anything if you have control of the model and can make it do anything you
want, and can pick out only supporting evidence from the literature.

What’s hard is NOT assuming the truth of any particular conclusion, and
simply trying to make the model’s properties be as close to reality as
you can, all your beliefs about the ultimate outcome being ignored and
put out of mind to the extent humanly possible. That way is hard, but the
outcome is far harder to ignore or deny. If you do what you say above
that you intend, you will come up with a model demonstrating what you
mean by family-oriented behavior. Your conclusions from the behavior of
the model will convince all those who already believe that human beings
are inherently family-oriented, and none of those who don’t – even those
who don’t think so but are willing to be persuaded if you could defend
your reasoning about the premises and the logic of your deductions. Of
course nothing will convince those who are not willing to be persuaded,
but it’s only the others who are worth convincing anyway. You’d better
not come on as begging the question with them.

In summary: I think some other language would suit your modeling efforts
a lot better than NetLogo would. You won’t get all the whistles and
bells, but you’ll have a lot more freedom to think your own way without
being forced into the channels provided by the designers of the language.
I suggest Basic or Pascal. You can go on from there later with Visual
Basic or Delphi (more whistles and bells).

And it would be best to put what you know to be true into the model, and
hope that when it runs, it will do what you believe it should.

Best,

Bill P.

[From Bruce Gregory (2010.03.04.2210 UT)]

Bill Powers (2010.03.04.1035 MST)

Frank Lenk (2010.03.03.11.20 CST)

FL: My vantage point is different. I believe humans are inherently social.

BP: Then that belief is the first thing you have
to set aside. You can't show with a model or in
any other way that your belief is true if you
simply assume it's true before you start
constructing your model. All you can do that way
is to explore the implications that would follow
IF the belief were true. You still don't know if
it's true, so you don't know if the deduced
implications have any truth either. You'll be wasting your time.

BG: I don't see any problem here. The individual simply needs to control a perception such as "cooperating with others." The reference level for this perception is set on "high" by a control system one level up in the hierarchy.

BP: What's hard is NOT assuming the truth of any
particular conclusion, and simply trying to make
the model's properties be as close to reality as
you can, all your beliefs about the ultimate
outcome being ignored and put out of mind to the
extent humanly possible. That way is hard, but
the outcome is far harder to ignore or deny. If
you do what you say above that you intend, you
will come up with a model demonstrating what you
mean by family-oriented behavior. Your
conclusions from the behavior of the model will
convince all those who already believe that human
beings are inherently family-oriented, and none
of those who don't -- even those who don't think
it is but are willing to be persuaded if you
could defend your reasoning about the premises
and the logic of your deductions. Of course
nothing will convince those who are not willing
to be persuaded, but it's only the others who are
worth convincing anyway. You'd better not come on
as begging the question with them.

BG: The hierarchy allows you to solve this problem quite simply. You postulate the existence of a perception such as "cooperating with others". The test for the controlled variable tells you if the individual is controlling this perception. (If the reference level for this perception is set to high by a level in the hierarchy above this level.) Frank's guess is simply that you will find this to be the case in a majority of human interactions. Or am I missing something here?
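
One way to see what that test amounts to in a simulation (a toy example with
invented names; nothing here is a claim about how "cooperating with others"
would actually be measured): apply a known disturbance to the candidate
variable and see whether the agent's output cancels it.

globals [ reference perception output disturbance gain ]

to setup-test
  set reference 5
  set gain 0.5        ;; set gain to 0 to model an agent that is NOT controlling the variable
  reset-ticks
end

to step-test                                               ;; one pass of experimenter plus agent per tick
  set disturbance (3 * sin (ticks * 10))                   ;; the experimenter's push on the candidate variable
  set output (output + gain * (reference - perception))    ;; the agent acts on its error
  set perception (output + disturbance)                    ;; the variable depends on action and disturbance together
  tick
end

With the gain at 0.5 the perception stays near the reference while the output
comes to vary opposite to the disturbance, which is the signature of a
controlled variable; with the gain at 0 the perception simply follows the
disturbance around.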

Sorry to spoil Rick's day.

Bruce

[Martin Taylor 2010.03.04.17.18]
[From Bill Powers (2010.03.04.1035 MST)] to

Frank Lenk (2010.03.03.11.20 CST)

Frank, I've downloaded NetLogo and can't see (yet) how it can be used to construct a control-system model.
...
In summary: I think some other language would suit your modeling efforts a lot better than NetLogo would. You won't get all the whistles and bells, but you'll have a lot more freedom to think your own way without being forced into the channels provided by the designers of the language. I suggest Basic or Pascal. You can go on from there later with Visual Basic or Delphi (more whistles and bells).

I'd argue against using a platform-dependent language, if you can. Platform-dependent languages reduce the ease of collaboration to some degree, and can cut some people off from trying out the resulting programs. If you need maximum speed, you probably do have to use a platform-dependent language, but given how quickly computing power improves, what needs maximum speed this year is a middle-of-the-road requirement a couple of years later.

At the moment, I'm looking into the freeware OpenLaszlo environment for developing control experiments and simulations. I haven't actually programmed anything in it yet, but I have been looking at the documentation <http://www.openlaszlo.org/documentation>, and what I see looks promising. It is a Web-based system, but the applications can run either totally in the client machine or in a client-server relationship, whether the machine is Windows, Mac, Linux or something else that can run a modern browser. The main thrust of the language is to provide interactive visual (and auditory, I think) displays of various media, including video and procedurally constructed drawings.

To badly oversimplify OpenLaszlo, programs are written in a declarative object-oriented language called LZX, which is pure XML. LZX serves as a wrapper of sorts for procedural code written in Javascript, which is where the computational work is done. The LZX compiler emits either DHTML or Flash, which is what the client web browser sees. Timings are in msec, and are not dependent on the processor speed, provided the processor can keep up with the requests.

I have no idea whether the resulting code is likely to be fast enough to deal with simulations of reorganization in massive control hierarchies, but the little tutorial demos suggest that it should be adequate for the types of demos in LCSIII. I hope it won't be too long before I can be a bit more definite about whether this is true. If anyone else has tried OpenLaszlo, I would be glad to hear of your experience with it.

And as a final note, I repeat that OpenLaszlo is freeware and platform-independent.

Martin

[From Bill Powers (2010.03.04.1706 MST)]

Bruce Gregory (2010.03.04.2210 UT) –

BG: I don’t see any problem
here. The individual simply needs to control a perception such as
“cooperating with others.” The reference level for this
perception is set on “high” by a control system one level up in
the hierarchy.

BP: Frank is proposing that humans are inherently family-oriented,
meaning that they are born with this kind of perception and a nonzero
reference level for it, or else with some precursor equivalents that
inevitably mature into this family-orientation. I have no objection to
this proposition, but if you want to demonstrate or prove that it’s true,
you can’t begin by assuming it’s true (unless, I suppose, you have some
plan to prove it by the back door, showing that not assuming it leads to
contradictions – which isn’t easy to do).

I would agree that most people are family-oriented within some meaning of
that term, but that isn’t the way Frank put his idea: he proposed that
they are inherently that way, which is different from saying that
they learn or decide to be that way. For all I know that is inborn. But
that would have to emerge from the properties of the model before I could
say it’s true.

BG: The hierarchy allows you to
solve this problem quite simply. You postulate the existence of a
perception such as “cooperating with others”. The test for the
controlled variable tells you if the individual is controlling this
perception. (If the reference level for this perception is set to high by
a level in the hierarchy above this level.) Frank’s guess is simply that
you will find this to be the case in a majority of human interactions. Or
am I missing something here?

BP: If this were the subject, Frank would be saying that there is no need
to learn to be cooperative because we are inherently cooperative. So the
model doesn’t have to reorganize itself to control for cooperation –
it’s already that way from the start. The question isn’t whether some
people are family-oriented or cooperative; it’s how they got that way. To
show that they are born with those perceptions and references, the
modeler has to show that reorganization will NOT end up producing them in
the same way we learn not to stick our fingers into a fire. Inherent
means not learned or acquired.

BG: Sorry to spoil Rick’s day.

BP: Are you? That’s decent of you.

Best,

Bill P.

[From Bruce Gregory (2010.03.05.0140 UT)]

[From Bill Powers (2010.03.04.1706 MST)]

Bruce Gregory (2010.03.04.2210 UT) –

BG: I don’t see any problem
here. The individual simply needs to control a perception such as
“cooperating with others.” The reference level for this
perception is set on “high” by a control system one level up in
the hierarchy.

BP: Frank is proposing that humans are inherently family-oriented,
meaning that they are born with this kind of perception and a nonzero
reference level for it, or else with some precursor equivalents that
inevitably mature into this family-orientation. I have no objection to
this proposition, but if you want to demonstrate or prove that it’s true,
you can’t begin by assuming it’s true (unless, I suppose, you have some
plan to prove it by the back door, showing that not assuming it leads to
contradictions – which isn’t easy to do).

I would agree that most people are family-oriented within some meaning of
that term, but that isn’t the way Frank put his idea: he proposed that
they are inherently that way, which is different from saying that
they learn or decide to be that way. For all I know that is inborn. But
that would have to emerge from the properties of the model before I could
say it’s true.

BG: What does it mean to “learn” or “decide”? I thought reference levels were either inborn or the result of reorganization. Is there a PCT model of learning that does not involve reorganization? Is there a PCT model of deciding? Since you put so much faith in models, I think we should stick with the model. Or have I missed something very important? The only question to be answered would seem to me to be, is this reference level inborn or does it arise from reorganization? Answering that question would not be easy. In fact, I suspect it would prove impossible in practice. (As someone observed, possible in principle usually means impossible in practice.)

BG: The hierarchy allows you to
solve this problem quite simply. You postulate the existence of a
perception such as “cooperating with others”. The test for the
controlled variable tells you if the individual is controlling this
perception. (If the reference level for this perception is set to high by
a level in the hierarchy above this level.) Frank’s guess is simply that
you will find this to be the case in a majority of human interactions. Or
am I missing something here?

BP: If this were the subject, Frank would be saying that there is no need
to learn to be cooperative because we are inherently cooperative. So the
model doesn’t have to reorganize itself to control for cooperation –
it’s already that way from the start. The question isn’t whether some
people are family-oriented or cooperative; it’s how they got that way. To
show that they are born with those perceptions and references, the
modeler has to show that reorganization will NOT end up producing them in
the same way we learn not to stick our fingers into a fire. Inherent
means not learned or acquired.

BG: I suggest that Frank finesse the question and simply assume the agents are controlling for “cooperating with others.” After all, rational expectation theorists don’t spend a great deal of time worrying about the reasons that agents are rational.

BG: Sorry to spoil Rick’s day.

BP: Are you? That’s decent of you.

BG: Gee, that sounds almost sarcastic. Rick’s hostility must be catching. I hope I don’t come down with it.

Bruce

(Gavin Ritz 2010.03.05.16.30NZT)

[From Bruce
Gregory (2010.03.05.0140 UT)]

[From Bill Powers
(2010.03.04.1706 MST)]

Bruce Gregory (2010.03.04.2210 UT) –

BP: Frank is proposing that humans are inherently family-oriented, meaning that they are born with
this kind of perception and a nonzero reference level for it, or else with some
precursor equivalents that inevitably mature into this family-orientation. I
have no objection to this proposition, but if you want to demonstrate or prove
that it’s true, you can’t begin by assuming it’s true (unless, I suppose, you
have some plan to prove it by the back door, showing that not assuming it leads
to contradictions – which isn’t easy to do).

I would agree that most people are family-oriented within some meaning of that
term, but that isn’t the way Frank put his idea: he proposed that they are
inherently that way, which is different from saying that they learn or
decide to be that way. For all I know that is inborn. But that would have to
emerge from the properties of the model before I could say it’s true

GR: In terms of PCT I have proposed that
at the highest level the reference signals are never zero. These reference
signals are always pain (fear) signals, which are negative; however, because of
the asymmetry in how PCT works, controlling such a perception comes out as a
positive controlled perception, in this case “family orientation or cooperation”.

I have identified about seven pain (fear) reference
signals that have many perceptions that can be controlled for (and always a
non-zero reference level). One particular reference signal that is linked to
affiliation, cooperation, family orientation, and socialization is the pain (fear)
reference signal of ostracism and alienation; obviously these fears are closely
linked to the organism’s survival.

In some organisms the controlled perception
does not have to become “family orientation or cooperation”; it may
become the opposite. For example, a person might decide to become a hermit who
lives in a mountain cave and has nothing to do with other people, because other
(pain) reference signals are also, so to speak, switched on. For example, it may
be that the restriction (pain) signal is also being controlled for and is
perceived as some sort of freedom (so freedom is being controlled for). So this
person chooses to live away from people, in a cave on a mountain, living off
hunting wild animals.

I believe that the controlled perceptions
related to these pain signals form a “one-to-many mapping”, so to
assume that cooperation is an inbuilt reference level does not seem to me a
very robust way to look at individuals. It’s just one of many
perceptions we can control for.


[From Bruce Gregory (2010.02.05.1152 UT)]

(Gavin Ritz 2010.03.05.16.30NZT)


BG: Thanks.

Bruce

Bill – Thank you for your comments. You have indeed
pinpointed some confusion in my thinking. I hadn’t been thinking
much at all about how perceptions came about. I assumed that perceptual systems
were relatively fixed, that reference levels were relatively fixed, so all that
was left was changing behaviors to change the level of what is perceived to
better match the reference level. I see now this is an oversimplification
of PCT.

I have thought some about what you wrote, though I don’t
feel like I have it all clearly in my head yet. Let me surface some of
what I think I am learning, and I would appreciate any comments and corrections
you feel are worthwhile.

I apologize in advance that this is not all put together in a
neat linear bundle that is easy to follow. It is more in the order of my
realizations as I read and re-read your post.

BP: Other children (and parents) don’t see the world in
terms of right and wrong, so the question of how much rightness or wrongness is
wanted doesn’t apply.

a. FL: So something has to be perceived (right and wrong) before there can be a
reference level set for it (how much rightness and wrongness wanted).

BP: On the other hand, we may be lucky enough to
reorganize and cease to perceive superiority as a human attribute, or as
existing at all. Superiority as an automatic way of perceiving then disappears,
taking with it any reference levels that may have been set for it, as well as
other causal relationships with it.

a. FL: Again, something has to be perceived (superiority) for reference levels
about it to exist. But also, that reorganization changes how or what we
perceive, not just how much of something we perceive. Behavior changes how much
of something we perceive, but doesn’t affect the perceptual functions
themselves.

BP: What you mean here is perceptions, not reference
signals. Where do our ways of perceiving things come from? Some are
biologically based, some are acquired, some are modifications of
biologically-based ways of perceiving. The reference setting is a different
matter: that determines how much of a given perception is desired. Once
we get our perceptions organized they become semi-permanent as kinds of
perceptions. We may learn that being wealthy is a sign of some kind of
superiority, but that doesn’t mean we will forever want to be superior. We may
find that being superior is no fun – too much is expected of you – and
try to avoid being superior. So we set the reference level for superiority at a
low level.

On the other hand, we may be lucky enough to
reorganize and cease to perceive superiority as a human attribute, or as
existing at all. Superiority as an automatic way of perceiving then disappears,
taking with it any reference levels that may have been set for it, as well as
other causal relationships with it.

a. FL: Lots going on here:

i. Perceptions become relatively fixed (as they get organized), but
they don’t necessarily start that way.

ii. Given fixed perceptions, higher level systems may vary reference
levels of lower systems to do a better job of approaching higher level goals.

iii. Therefore, higher level references are more stable than
lower level references.

iv. If changing the lower level references doesn’t
sufficiently reduce the error perceived in those higher levels (or if we are
“lucky”), we may reorganize our lower level perceptual systems.

v. Reorganization mainly affects perception – i.e., perceptual
functions. (Actually, this doesn’t seem right – maybe perceptual functions
are just one of many things that reorganization can affect. Can it also
affect those higher level references that seem to be governing any changes
to the lower level references?)

b. FL: So it might be possible to model a baby as undergoing reorganizations
more rapidly than an adult as it figures out what in its world is important to
perceive.

(From original post) FL: Initially, a child
doesn’t know whether hitting another child is right or wrong. But
if the hitting elicits a scolding from mom, the child changes actions until
he/she finds a method of play that does not involve hitting. Eventually,
this alternative style of play becomes the child’s own internal
reference signal.

BP: I’d say that right
and wrong become perceptions and that the child learns to control them. Some
children choose a low setting for a perception of rightness, or a high setting
for perception of wrongness (deliciously bad). However, if the child chooses a
low reference level for the things perceived as a state of rightness, or tries
out how it feels to seek wrongness, the consequences may lead to enough
intrinsic error to cause reorganization. Other children (and parents) don’t see
the world in terms of right and wrong, so the question of how much rightness or
wrongness is wanted doesn’t apply.

a. FL: So, a parent might be modeled as providing a signal the child can
perceive in such a way as to create an error in the child (as a scolding might)
that then starts a reorganization, which proceeds until the child perceives
what the parent wishes the child to perceive (right and wrong, for example);
a toy sketch of this appears after point e below.

b. Which, in turn, could be modeled as the parents conducting tests for the
controlled variable with their children until they can conclude the children
are, in fact, controlling the variable the parents want them to be controlling.

c. One thing a parent might want a child to perceive is the parent’s
emotional state.

d. A child might become very attuned to a parent’s emotional state and use
this as a substitute for an under-developed perceptual apparatus, though over
time we would expect the perceptions to become more internalized as the child
matures.

e. It seems then that a parent can influence what is perceived (parent’s
emotional state), as well as the reference level for how much of that
perception is desired (happy, or at least not mad). Ultimately, though, a
high level system in the child chooses the reference level for parent emotional
state depending on the high level goal being pursued. (Choosing a lower
reference level for parent’s emotional state so that the child can set a
higher level for peers’ emotional state or the child’s own internal
emotional state seems to be what adolescence is about.)
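
A toy sketch of points a and b (the names play-style, intrinsic-error and
reorganize-play are invented purely for illustration, and the "scolding" is
reduced to a single number rather than anything a real parent does):

turtles-own [ play-style intrinsic-error ]  ;; play-style: 0 = hitting ... 1 = gentle play (toy scale)

to reorganize-play                          ;; run by the "child" turtle once per tick
  ;; the "parent" side of the loop: scold (raise intrinsic error) while play involves hitting
  ifelse play-style < 0.5
    [ set intrinsic-error (intrinsic-error + 0.1) ]
    [ set intrinsic-error (intrinsic-error * 0.9) ]
  ;; E. coli-style reorganization: while intrinsic error stays high, keep making
  ;; random changes to the organization; once the error falls, leave it alone
  if intrinsic-error > 0.5
    [ set play-style (play-style + random-normal 0 0.2) ]
end

Running this, play-style wanders at random while the scolding keeps the
intrinsic error high, and stops changing once it has drifted into the
no-hitting region, which is all that "reorganizing until the scolding stops"
means in this sketch. Point b is then just the parent's half of the same loop,
viewed as a test for the controlled variable.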

  1. BP: My advice is to model controlled
    variables first, not reference levels. Decide what is being controlled
    first, then let higher levels of control systems determine how much of it is
    wanted by varying the reference signal. Once you have decided what is to be
    controlled, figure out how it is to be controlled: what actions can affect it,
    and what lower-level perceptions are used as a basis for perceiving it. And
    finally, try to figure out why it is being controlled: what higher-level
    perceptions depend on this perception and similar perceptions at the same
    level.

You can also, less ambitiously, assume a reference level and simply try to
model one system at one level, as I do with tracking tasks. But I suspect that
in your application of PCT, hierarchical relationships will be of interest

a. FL: Sounds like a plan. Thanks!

I’m sure I’m still getting lots of things wrong,
but I hope my wrestling with your words will help me reorganize my thinking enough
that I can begin to move in a direction that will ultimately bring me more
success.

Frank

···

From: Control Systems
Group Network (CSGnet) [mailto:CSGNET@LISTSERV.ILLINOIS.EDU] On Behalf Of Bill
Powers
Sent: Thursday, March 04, 2010 11:12 AM
To: CSGNET@LISTSERV.ILLINOIS.EDU
Subject: Re: [CSGNET] Creating PCT-based social-economic agents

[From Bill Powers (2010.03.04.0808 MST)]

Frank Lenk (2010.03.03.11.20 CST) –

FL: In my view, the atom of a social-economic analysis is
the family.

I think this helps explain why there is no concept of
“enough” in economics. The concept of enough applies to
families – when are they safe enough? When do my children have enough
opportunity? We can show that in general, kids who ride around in bigger
cars and live in bigger homes are safer, and also get to go to better schools
and have more options for better-paying jobs.

BP: Could it be that this is a recipe for positive (error-increasing) feedback?
Why is there such a concern for safety? Could it be that riding around in big
cars and living in big houses is an invitation to envy and resentment by others
(whether real or only imagined by the well-off) which lead to feeling even less
safe?

FL: It can be then be considered sensible to want more of
these things. If our reference signals are for perfect safety and
infinite opportunity, it may even be reasonable to want a lot more of them.

BP: Yes. It’s the nature of positive feedback that the goal recedes as one
approaches it, so what begins as a slight inclination ends as an all-out act of
desperation. The richer you get, the more you have to lose and the more you
fear losing it. Imagine being given a delicate eggshell beautifully painted and
gilded by Picasso or Matisse, one of a kind and worth a million dollars. Would
you dare walk around with it in your pocket or in your hand? Would you take it
to a bar with you and show it to other people? Would you advertise the place
where you keep it at night? Even hiring a guard would be self-defeating because
the presence of the guard would be a message that there is something highly
valuable here. You’d have to be terribly concerned about cracking the eggshell,
or dropping it, or dropping something else onto it, or having a pet step on it
or bump it off the table, or a child play roughly with it. In the end you might
have to build a secret room for it known only to you, where you can go to look
at the egg once in a while, all by yourself (when people won’t wonder where you
are and go looking for you). And you might hesitate to visit the egg yourself,
for fear of getting nervous and having an accident with it. Everything you do
to protect the eggshell only makes you worry about it more.

FL: Yet, even with these as reference signals, it is not
clear to me why we seem to think that “more stuff” is the surest
way to achieve them. One can imagine a hypothetical alternative society
that has the same references for ends, but not means – choosing instead
to focus on greater community cohesion, breaking up the concentrations of
poverty that create the high crime in the first place, conserving scarce
resources rather than rebuilding on new land what has been abandoned in urban
cores, and investing more adult time rather than adult money in their youth –
and finding that this yielded equally good outcomes.

BP: You may have put your finger on an important non-ideological reason to
avoid radical inequalities in a society: they don’t achieve what they are
supposed to achieve for or against either side. For any kind of goal, the
action you take to reach it could be moving you, or it, in the wrong direction.
If you don’t notice it’s doing that, you’ll think something is disturbing the
controlled variable and try to act even more strongly against the disturbance –
only your own action is the disturbance and is making the error worse.

FL: So, I am interested in where our reference signals come
from. Some are undoubtedly biologically based (which? I will need to come
back to this later). But some reference signals must be at least passed
from parent to child (hence the reason why I say my minimum number of PCT-based
agents is two). Some are also likely passed from the
“society” to the parents, but for now, I am leaving a discussion of
how to model that for another day (such influence might be modeled as arising
from the network of people with whom we interact, for example).

BP: This question keeps coming up, but it involves a false assumption. The
false assumption is that reference signals are set once and for all. They’re
not: they have to be adjustable, continually adjustable, to allow higher
systems to vary them as a means of achieving higher goals.
What you mean here is perceptions, not reference signals. Where do our ways of
perceiving things come from? Some are biologically based, some are acquired,
some are modifications of biologically-based ways of perceiving. The reference
setting is a different matter: that determines how much of a given
perception is desired. Once we get our perceptions organized they become
semi-permanent as kinds of perceptions. We may learn that being wealthy is a
sign of some kind of superiority, but that doesn’t mean we will forever want to
be superior. We may find that being superior is no fun – too much is expected
of you – and try to avoid being superior. So we set the reference level
for superiority at a low level.

On the other hand, we may be lucky enough to reorganize and cease to perceive
superiority as a human attribute, or as existing at all. Superiority as an
automatic way of perceiving then disappears, taking with it any reference
levels that may have been set for it, as well as other causal relationships
with it.

FL: George Herbert Mead has this idea of a “social
self” that intrigues me. I may not be getting this exactly right,
but in essence I believe this means that we learn to be ourselves not from
controlling our actions directly, but from controlling others’ reactions
to our actions. We change our actions until we get the desired reaction from
others.

BP: A lot of the confusion over things like a “social self” are
cleared up if you stop wondering if such things really exist and remember that
they are perceptions. Of course I can perceive a self that is adjusted to have
a certain appearance to others, as best I can judge. When that’s not a problem,
there are other selves which are more simply descriptive, and sometimes there’s
no self at all – just a concern with what’s going on.
And the observer of all these selves is something entirely different from a self.
It’s an Observer. Sometimes, in fact, there is no self until someone asks about
it. They say, “What do you think about Oprah?” The first
reaction may be “Who? Me? Well, I guess I think she’s a pretty good
person, though I don’t know that much about her …” You have to look and
see what thoughts are there, though when asked there was nobody, just then,
thinking them.

A self as a permanent identity is different from a self as a perceptual
construct.

FL: Initially, a child doesn’t know whether hitting
another child is right or wrong. But if the hitting elicits a scolding
from mom, the child changes actions until he/she finds a method of play that
does not involve hitting. Eventually, this alternative style of play becomes
the child’s reference own internal reference signals.

BP: I’d say that right and wrong become perceptions and that the child learns
to control them. Some children choose a low setting for a perception of
rightness, or a high setting for perception of wrongness (deliciously bad).
However, if the child chooses a low reference level for the things perceived as
a state of rightness, or tries out how it feels to seek wronginess, the
consequences may lead to enough intrinsic error to cause reorganization.

Other children (and parents) don’t see the world in terms of right and wrong,
so the question of how much rightess or wrongness is wanted doesn’t apply.

FL: It is as if we have a relatively higher gain for
others’ reactions than for the direct consequences of our actions, at
least for some of them (I suspect that “peer pressure” works at
least somewhat similarly). This was brought home to me in a simple way not
long ago. I snore. So as I fall asleep, I snore a little as I
relax. If I am not quite completely asleep, some part of me is aware that I am
snoring, but left to my own devices, I don’t react to it. My wife,
however, hates my snoring. So as I begin to snore, she moans in response.
THAT I react to and turn over – not my actions, but her
reaction to my actions. And gradually, I find myself more aware of my
snoring and (sometimes) I turn over before she moans.

BP: This is altruism, which puzzles some people. You have a high reference
level for your perception of your wife’s peace and comfort, which is part of
what loving her means. You have progressed beyond trying not to snore just to
keep her from complaining about it, and gone on to the stage of wanting her not
to be disturbed. A cynic might say that you just don’t like moaning, and I
suppose that’s possible, but it seems to me that a person who could believe
that explanation has never loved anyone.

FL: We seem exquisitely tuned to be sensitive to the
reactions of others. Even in the midst of a conversation, I can sense myself
changing what I say and how I say it in response to subtle clues about whether
someone agrees or disagrees with me.

BP: Being sensitive to – perceiving – the reactions of others and learning to
interpret them reasonably well does not say what reactions you want them to be
having. If you see that your actions are annoying someone else, that may be
exactly what you want. Kids do it to teachers quite often. We use the term
“being sensitive to” to imply that we have a kindly, protective
attitude toward the reaction, but that isn’t necessarily or even often the
case. The reference level for the person’s reaction depends on a lot of other
circumstances and your higher-order perceptions.

FL: So my question is how do I begin to model this kind of
thing? I am hoping the “old hands” at PCT modeling could give me
some suggestions. I plan to do my programming initially in the agent-based
modeling framework NetLogo. (For those who are interested, it is
available as a free download here: http://ccl.northwestern.edu/netlogo/.)
But this shouldn’t constrain anyone from feeling free to offer me
whatever advice they see fit.

BP: I love being asked to give advice because I’m full of it and as a proponent
of the Method of Levels I’m not supposed to force it on people. Well, you asked
and this isn’t therapy.
My advice is to model controlled variables first, not reference levels. Decide what
is being controlled first, then let higher levels of control systems determine
how much of it is wanted by varying the reference signal. Once you have decided
what is to be controlled, figure out how it is to be controlled: what actions
can affect it, and what lower-level perceptions are used as a basis for
perceiving it. And finally, try to figure out why it is being controlled: what
higher-level perceptions depend on this perception and similar perceptions at
the same level.

You can also, less ambitiously, assume a reference level and simply try to
model one system at one level, as I do with tracking tasks. But I suspect that
in your application of PCT, hierarchical relationships will be of interest.
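
To make the one-system, one-level case concrete, here is a minimal sketch (my
own illustration in Python, not code from this thread; all the names are made
up): a single loop whose perception is compared with a fixed reference and
whose integrated output cancels an arbitrary disturbance.

============================================================

# Minimal single control loop (illustrative sketch only).
def run_loop(steps=1000, dt=0.01, gain=50.0, r=1.0):
    o = 0.0                                  # output quantity
    history = []
    for t in range(steps):
        d = 0.5 if t > steps // 2 else 0.0   # step disturbance halfway through
        p = o + d                            # environment: perception = output + disturbance
        e = r - p                            # comparator: reference minus perception
        o += dt * gain * e                   # integrating output function
        history.append(p)
    return history

# The perception settles near the reference and returns there after the
# disturbance appears, without the loop ever "knowing" about the disturbance.

============================================================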

Best,

Bill P.

[From Bill Powers (2010.03.05.0725 MST)]

Martin Taylor 2010.03.04.17.18 –

MT: I’d argue against using a
platform-dependent language … At the moment, I’m looking into the
freeware OpenLaszlo environment for developing control experiments and
simulations.

BP: I’ve been looking for a platform-independent language as long as PCT
has been in existence. But it also has to have a few other
characteristics, such as being easy to learn, easy to understand after a
program has been written, and flexible as a modeling language. OpenLaszlo
has only the first characteristic. The sample programs are incredibly
verbose, with the terms that are meaningful for the task at hand buried
in the details of a text-oriented and object-oriented language. Maybe I
just have to get used to it.

The last, the object-oriented aspect, is the worst part of it from my
point of view. I’m sure this is just a preference of mine, but I have a
problem with thinking of “objects” that are defined not only by
their properties, but by “methods” that define how those
properties are to be used. Look at this example, which uses buttons in
one window to move a second window left and right.

···

============================================================================

<canvas width="100%" height="200">
  <window x="100" y="60" title="Window 2" name="windowTwo" id="windowTwoId">
    <!-- Moves the second window twenty pixels in specified direction -->
    <method name="moveWindow" args="direction">
      // decide which direction to go
      if (direction == "left") {
        var increment = -20;
      } else if (direction == "right") {
        var increment = 20;
      }
      var originalX = this.x;
      var newX = originalX + increment;
      this.setAttribute('x', newX);
    </method>
    <text>This is the second window.</text>
  </window>
  <window x="20" y="20" width="210" title="Simple Window">
    <simplelayout axis="x" spacing="4"/>
    <button text="Move Left" name="button1" onclick="windowTwoId.moveWindow('left')"/>
    <button text="Move Right" onclick="windowTwoId.moveWindow('right')"/>
  </window>
</canvas>

==============================================================

The code for window two defines a method called “moveWindow”. This
method is evoked by the string

windowTwoId.moveWindow('left')

When that command is given, windowTwo obligingly moves itself left
by 20 units.

In Delphi, the code wouldn’t be a lot different:

with window[2] do

Left := Left - 20

Where “Left” means the location of the left side of any window,
and [2] specifies which window in a list of windows. If you wanted to move
window three to the left, you would just write

with window[3] do

Left := Left - 20

To make it possible to move windowThree left in OpenLaszlo, you’d have to
change the code that defines windowThree to add the method that moves it
left, and then write, in the part of the program that wants windowThree
moved left,

windowThreeId.moveWindow('left')

Imagine that you want to use an object like a one-foot ruler to
prop a window open. To manufacture an object-oriented ruler that could be
used for that purpose, you would have to build into it a method by which
the ruler can prop a window open, to go along with the method by which
the ruler can measure the lengths of things laid alongside it. Then, to
prop a window open, you would have to tell the ruler,
“ruler.propwindowopen.” It would then proceed to prop the
window open.

That’s a weird way to think about objects, and all just to get around the
idea that we are agents who use objects for our own purposes, not
purposes built into the objects – the “affordances” Gibson
thought objects had in them.

But I’ll give OpenLaszlo a try, anyway.

Best,

Bill P.

[From Bill Powers (2010.03.05.0850 MST)]

Bruce Gregory (2010.03.05.0140 UT) --

BG: What does it mean to "learn" or "decide"? I thought reference levels were either inborn or the result of reorganization.

BP: You're right, for "learn" I should have said "reorganize, memorize, or use a known algorithm" to be technically correct. In ordinary language, however, where the meanings are not the issue, I too say learn or decide.

Reference levels are not inborn; if they were, how would higher-level control systems act to alter their own perceptions? Imagine being born with an unchangeable reference level for the angle between forearm and upper arm, or where in a room you want to be, or how much to withdraw from your bank account.

BG: Is there a PCT model of learning that does not involve reorganization? Is there a PCT model of deciding? Since you put so much faith in models, I think we should stick with the model.

BP: Yes, in both cases. "Learning" in common language can mean reorganizing, memorizing, or reasoning out the answer to a problem. Each of those processes can be done in the PCT model, but not by the same underlying mechanism. "Deciding" can occur by algorithm (take the piece of cake closest to you) or by reorganization. In the former case the outcome is determined by present-time data and the algorithm; in the latter, by randomly trying different choices until one is found that reduces error.

BG: Or have I missed something very important? The only question to be answered would seem to me to be, is this reference level inborn or does it arise from reorganization? Answering that question would not be easy. In fact, I suspect it would prove impossible in practice. (As someone observed, possible in principle usually means impossible in practice.)

BP: You're thinking of reference levels as things that get set once and for all. You have missed something very important. That's not how the HPCT hierarchy works. Reference levels, with the obvious exception, are adjusted by variable signals coming from the output functions of higher-order control systems, and are the means the higher system uses to control its own perceptions. At the highest level, reference levels must vary much more slowly because there is no higher system using them as a means of control. Reorganization must be the main source of variation at the top level.
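
As a very small, purely illustrative sketch of that arrangement (mine, in Python, not anything from the thread): a higher-order loop whose output is the lower loop's reference signal.

============================================================

# Toy two-level hierarchy (illustrative only). The higher system controls a
# perception built from the lower system's perception, and its output becomes
# the lower system's reference signal.
def run_hierarchy(steps=2000, dt=0.01):
    o_hi, o_lo = 0.0, 0.0
    r_hi = 2.0                            # top-level reference; changes only slowly, if at all
    for _ in range(steps):
        d = 0.3                           # disturbance acting on the lower loop
        p_lo = o_lo + d                   # lower-level perception
        p_hi = 0.5 * p_lo                 # higher-level perception is a function of lower ones
        o_hi += dt * 10.0 * (r_hi - p_hi) # higher output ...
        r_lo = o_hi                       # ... is the lower system's continually varying reference
        o_lo += dt * 50.0 * (r_lo - p_lo)
    return p_hi, p_lo, r_lo

# The lower loop tracks whatever reference the higher loop hands it; the
# higher perception ends up at its own reference as a result.

============================================================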

BG: I suggest that Frank finesse the question and simply assume the agents are controlling for "cooperating with others." After all, rational expectation theorists don't spend a great deal of time worrying about the reasons that agents are rational.

BP: That's what is wrong with rational expectation theory, isn't it? And assuming that cooperation is a universal inborn controlled variable doesn't explain competition, or crime, or ignoring what other people do. Those things happen, too.

BG: Sorry to spoil Rick's day.

BP: Are you? That's decent of you.

BG: Gee, that sounds almost sarcastic. Rick's hostility must be catching. I hope I don't come down with it.

BP: Don't worry. You're probably immune to it.

Best,

Bill P.

[From Bruce Gregory (2010.02.05.1701 UT)]

[From Bill Powers (2010.03.05.0850 MST)]

You’re thinking of reference levels as things that get set once and for all. You have missed something very important. That’s not how the HPCT hierarchy works. Reference levels, with the obvious exception, are adjusted by variable signals coming from the output functions of higher-order control systems, and are the means the higher system uses to control its own perceptions. At the highest level, reference levels must vary much more slowly because there is no higher system using them as a means of control. Reorganization must be the main source of variation at the top level.

BG: I’m not sure that was my problem. I think my problem arose from understanding the nature of the hierarchy. If I adopt Gavin’s suggestion that the highest level involves avoiding pain I find the hierarchy makes more sense to me. In this case, the reference levels do indeed vary very slowly and reorganization is plausibly the main source of variation. Clearly there are pathological conditions where the individual seeks out pain, but in general we do our best to avoid it.

BG: I suggest that Frank finesse the question and simply assume the agents are controlling for “cooperating with others.” After all, rational expectation theorists don’t spend a great deal of time worrying about the reasons that agents are rational.

That’s what is wrong with rational expectation theory, isn’t it? And assuming that cooperation is a universal inborn controlled variable doesn’t explain competition, or crime, or ignoring what other people do. Those things happen, too.

BG: Agreed.

BG: Sorry to spoil Rick’s day.

BP: Are you? That’s decent of you.

BG: Gee, that sounds almost sarcastic. Rick’s hostility must be catching. I hope I don’t come down with it.

Don’t worry. You’re probably immune to it.

BG: That’s a relief!

Bruce

[From Frank Lenk (2010.03.05.08:00 CST)]

Sorry – forgot the time stamp on my last post.

Bill – Try looking at the Segregation model in the Social
Science section of the models library. To me, this looks like a set of agents
with a simple control system. Agents have a perception of their own color
and those of their neighbors. They have a reference level for the
minimum number of neighboring agents of a different color that they can tolerate.
They have a set of behaviors – forms of movement – that they enact
to change their perception of their neighborhood. They continue that
behavior until their perceptions match their reference level. Though the
system looks like it is in a stationary state at the end, in reality, the agents
are continuing to evaluate their perception vs. their reference level. This can
be shown by changing the reference level and seeing the agents start to move
again.

How would you change this simulation to be more consistent with
PCT? This will help highlight where you and I are still thinking
differently.

In general, I suspect that Basic and Pascal don’t have
functions for “agent-wants” or “agent-perceives”
either. The reference levels and perceptual functions have to be programmed.
It is no different with NetLogo, as far as I can tell. It just provides a
nice agent framework and relatively simple language that makes it appropriate
for prototyping – at least I think so.

On to your criticism of my philosophical approach, or rather,
that I am taking one BEFORE doing the modeling to prove it.

FL: My vantage point is different. I believe humans are
inherently social.

BP: Then that belief is the first thing you have to set aside. You can’t show
with a model or in any other way that your belief is true if you simply assume
it’s true before you start constructing your model. All you can do that way is
to explore the implications that would follow IF the belief were true. You
still don’t know if it’s true, so you don’t know if the deduced implications
have any truth either. You’ll be wasting your time.

FL: In my view, the atom of a social-economic analysis
is the family. I think that most of what we call economic behavior is really
about either finding someone to start a family with or satisfying the needs of
our families once we have them. In PCT terms, the economy is (part of the)
behavior that results from attempts to control our perceptions to our
references for family. (This may also apply to the underground economy, if
books like The Godfather are to be believed).

BP: Well, now you’ve told us what you intend to prove if you can, and I for one
don’t doubt that you’ll find a way to do it. It’s easy to prove anything if you
have control of the model and can make it do anything you want, and can pick
out only supporting evidence from the literature.

What’s hard is NOT assuming the truth of any particular conclusion, and simply
trying to make the model’s properties be as close to reality as you can, all
your beliefs about the ultimate outcome being ignored and put out of mind to
the extent humanly possible. That way is hard, but the outcome is far harder to
ignore or deny. If you do what you say above that you intend, you will come up
with a model demonstrating what you mean by family-oriented behavior. Your
conclusions from the behavior of the model will convince all those who already
believe that human beings are inherently family-oriented, and none of those who
don’t – even those who don’t think it is true but are willing to be persuaded if
you could defend your reasoning about the premises and the logic of your
deductions. Of course nothing will convince those who are not willing to be
persuaded, but it’s only the others who are worth convincing anyway. You’d
better not come on as begging the question with them.

In general, I find myself agreeing with you. One of my
criticisms of most of the agent-based economics models I have seen is that they
assume their conclusions, that they are toy models with no grounding in
reality.

So maybe I made a mistake in betraying my biases to you –
though I think we all have them and admitting them openly is the only way to lessen
their impact on research. You now know what they are, and can help prevent
me from making unwarranted assumptions.

Let me try again then. I observe the existence of families.
I observe that nearly everyone comes from a family, if not now then at some
point in their past. I don’t observe families operating in total
isolation. I don’t observe individuals operating in total
isolation. I want to build a model that explains how families survive and
reproduce over generations. I hypothesize that this requires members of the
family setting reference levels for and developing perceptions of family health
and safety. I hypothesize that one family cannot do everything necessary to
assure their survival. I hypothesize that some kind of cooperation among
families is required, or at least beneficial. I hypothesize that such
cooperation will be applied to the realm of satisfying material needs. I define
that realm as “the economy.” I hypothesize that even within
an overall cooperative scheme, there will still be competition and conflict.
I observe and then specify a certain level of technology and a certain legal
and institutional framework. I hypothesize that divisions of labor, differences
in roles, differences in status will develop. I hypothesize that on certain key
measures (distribution of income, for example), the simulated economy will
function in a manner similar to the real economy.

At the center of all those hypotheses is another one –
that PCT-based agents are what make all of those hypotheses plausible. As
the agents control their perceptions by changing behaviors, references, and
even their perceptual functions themselves, they will discover the kinds of
cooperation and competition that make sense for their artificial world. My
ultimate hypothesis, then, is that if I populate a sufficiently realistic artificial
world with sufficiently realistic agents, I should get sufficiently realistic
behavior for the model to be useful to understanding the real, not just
artificial world.

Is this any better?

Frank

···

From: Control Systems
Group Network (CSGnet) [mailto:CSGNET@LISTSERV.ILLINOIS.EDU] On Behalf Of Bill
Powers
Sent: Thursday, March 04, 2010 12:49 PM
To: CSGNET@LISTSERV.ILLINOIS.EDU
Subject: Re: [CSGNET] Creating PCT-based social-economic agents

[From Bill Powers (2010.03.04.1035 MST)]
Frank Lenk (2010.03.03.11.20 CST)
Frank, I’ve downloaded NetLogo and can’t see (yet) how it can be used to construct
a control-system model.
In the programming guide, an Agent is defined this way:
"The NetLogo world is made up of agents. Agents are beings that can follow
instructions. Each agent can carry out its own activity, all simultaneously.
"
Under Procedures we have:

In NetLogo, commands and reporters tell agents what to do. A command is
an action for an agent to carry out. A reporter computes a result and
reports it.
Most commands begin with verbs (“create”, “die”,
“jump”, “inspect”, “clear”), while most reporters
are nouns or noun phrases.
Commands and reporters built into NetLogo are called primitives. The
NetLogo Dictionary has a complete list of built-in commands and reporters.

Here are the “turtle-related” primitives from the dictionary:
back (bk), <breeds>-at, <breeds>-here, <breeds>-on, can-move?, clear-turtles (ct),
create-<breeds>, create-ordered-<breeds>, create-ordered-turtles (cro),
create-turtles (crt), die, distance, distancexy, downhill, downhill4, dx, dy,
face, facexy, forward (fd), hatch, hatch-<breeds>, hide-turtle (ht), home,
inspect, is-<breed>?, is-turtle?, jump, layout-circle, left (lt), move-to,
myself, nobody, no-turtles, of, other, patch-ahead, patch-at,
patch-at-heading-and-distance, patch-here, patch-left-and-ahead,
patch-right-and-ahead, pen-down (pd), pen-erase (pe), pen-up (pu), random-xcor,
random-ycor, right (rt), self, set-default-shape, __set-line-thickness, setxy,
shapes, show-turtle (st), sprout, sprout-<breeds>, stamp, stamp-erase, subject,
subtract-headings, tie, towards, towardsxy, turtle, turtle-set, turtles,
turtles-at, turtles-here, turtles-on, turtles-own, untie, uphill, uphill4

Note that there is no primitive called “turtle-perceives” or
“turtle-wants”.

Here is the primitive called “distancexy”:

====================================================================

distancexy xcor ycor

Turtle Command Patch Command

Reports the distance from this
agent to the point (xcor, ycor).

The distance from a patch is measured from the center of the patch. Turtles and
patches use the wrapped distance (around the edges of the world) if wrapping is
allowed by the topology and the wrapped distance is shorter.

if (distancexy 0 0) > 10
  [ set color green ]
;; all turtles more than 10 units from
;; the center of the world turn green.
===============================================================
This function reports the distance not to the turtle but to the program that
runs everything. It would be very awkward to make this distance variable into a
perception belonging to just one turtle, then compare this distance to a
reference distance and output a command to move in some specific direction that
depends on the error. I suppose it could be done, but getting a PCT model to
run would be a continual process of trying to fool NetLogo into doing something
it wasn't designed to do.

Here is a program example for making the turtle move to a "patch"
that has the lowest value for some "patch-variable."

============================================================

move-to patch-here  ;; go to patch center
let p min-one-of neighbors [patch-variable]  ;; or neighbors4
if [patch-variable] of p < patch-variable [
  face p
  move-to p
]

===================================================================


If the test comes out true, two commands are issued: face a
particular patch and move to that patch. You might get somewhere by
interpreting patches to mean states of mind or attitudes or something like
that, but the metaphors in NetLogo are all biased toward a particular view of
what behavior is, and it's not the PCT view as far as I have seen.

FL: ... The minimum number agents I need is two. 
 
There are lots of “Robinson Crusoe” models that, to focus on what
is essential about the human condition and economic problems in general, create
as their atom of analysis an isolated individual on a deserted island.
 
My vantage point is different. I believe humans are inherently social.

BP: Then that belief is the first thing you have to set aside. You can't show
with a model or in any other way that your belief is true if you simply assume
it's true before you start constructing your model. All you can do that way is
to explore the implications that would follow IF the belief were true. You
still don't know if it's true, so you don't know if the deduced implications
have any truth either. You'll be wasting your time.

If you look at the basic diagram for a PCT control system, you will find that
there is no function or signal called "control" in it.

FL: We don’t exist without others – in fact, we
can’t (at least until the advent of human cloning). Even Adam Smith
thought that the defining characteristic of humans was their capacity for what
he called “Sympathy” for others, though today we would translate
this more as “empathy”.  (See Smith’s *Theory of Moral
Sentiments* for more, a book he thought was more important than his *Wealth
of Nations*.)
 
 In my view, the atom of a social-economic analysis is the family. I think
that most of what we call economic behavior is really about either finding
someone to start a family with or satisfying the needs of our families once we
have them. In PCT terms, the economy is (part of the) behavior that results
from attempts to control our perceptions to our references for family. (This
may also apply to the underground economy, if books like *The Godfather*
are to be believed).

BP: Well, now you've told us what you intend to prove if you can, and I for one
don't doubt that you'll find a way to do it. It's easy to prove anything if you
have control of the model and can make it do anything you want, and can pick
out only supporting evidence from the literature.

What's hard is NOT assuming the truth of any particular conclusion, and simply
trying to make the model's properties be as close to reality as you can, all
your beliefs about the ultimate outcome being ignored and put out of mind to
the extent humanly possible. That way is hard, but the outcome is far harder to
ignore or deny. If you do what you say above that you intend, you will come up
with a model demonstrating what you mean by family-oriented behavior. Your
conclusions from the behavior of the model will convince all those who already
believe that human beings are inherently family-oriented, and none of those who
don't -- even those who don't think it is but are willing to be persuaded if
you could defend your reasoning about the premises and the logic of your
deductions. Of course nothing will convince those who are not willing to be
persuaded, but it's only the others who are worth convincing anyway. You'd
better not come on as begging the question with them.

In summary: I think some other language would suit your modeling efforts a lot
better than NetLogo would. You won't get all the whistles and bells, but you'll
have a lot more freedom to think your own way without being forced into the
channels provided by the designers of the language. I suggest Basic or Pascal.
You can go on from there later with Visual Basic or Delphi (more whistles and
bells).

And it would be best to put what you know to be true into the model, and hope
that when it runs, it will support what you believe it should do.

Best,

Bill P.


[Martin Taylor 2010.03.05.12.27]

[From Bruce Gregory (2010.02.05.1701 UT)]

Sorry if I’m butting in.

BG: I’m not sure that was my problem. I think my problem arose
from understanding the nature of the hierarchy. If I adopt Gavin’s
suggestion that the highest level involves avoiding pain I find the
hierarchy makes more sense to me. In this case, the reference levels do
indeed vary very slowly and reorganization is plausibly the main source
of variation. Clearly there are pathological conditions where the
individual seeks out pain, but in general we do our best to avoid it.

Maybe I’m wrong, but it sounds as though you are missing something else
– the role of intrinsic variables. For me, that’s where the concept of
“pain” is likely to matter. “Pain” would be a percept that occurs
somewhere in the normal (Perceptual control) hierarchy when some
intrinsic variable has a large enough amount of error. Error in some
part of the intrinsic variable system leads to reorganization in the
perceptual control hierarchy – at least that’s the “classic” HPCT
approach. Error in the perceptual control hierarchy is not equivalent
to pain, though it may be accompanied by the perception of pain.

Error in some element of the perceptual control hierarchy does not lead
to reorganization, even at the top level of the hierarchy; it leads to
output that influences the perceptual variable in the direction of
reducing error. In discussions of HPCT, it is, however, often assumed
that persistent and especially increasing error in the perceptual
control hierarchy is itself an intrinsic variable with an inborn
reference level of zero.

Intrinsic variables are those with genetically determined reference
levels, and whose values are not directly perceptible nor directly
affected by the normal outputs of the perceptual hierarchy (typically,
those outputs are muscular). You perceive hunger, but not low blood
sugar or stomach volume (though you may perceive hunger because the
time of day is near lunchtime, despite having high blood sugar and a
normal sized stomach). You control hunger by eating, and a side effect
of that control is a change in your blood sugar. Most influences on
intrinsic variables are side-effects of perceptual control, since most
(perhaps all) intrinsic variable values cannot be directly perceived.

Diagrammatically, the “intrinsic variable control loop” might look like the
diagram at http://www.mmtaylor.net/PCT/Mutuality/intrinsic.html (the screenshot
is not reproduced here); each blob and each arrow probably has many, many
components. (The lines connecting the perceptual control hierarchy to the
environment represent both inputs and outputs.)
A new top level of the perceptual control hierarchy can be grown by
reorganization as easily as a new connection can be made or an old one
broken. If the new top-level control system’s actions serve to reduce
intrinsic error, the new organization is likely to survive. If those
actions tend to increase error among the intrinsic variables, the new
level is unlikely to survive. So far as I understand it, that’s the
mechanism whereby infants increase their number of hierarchic levels of
control as they mature, though, as with the growth of arms and legs in
the fetus, I would expect there to be a genetic component directing how
the levels are likely to develop.
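
A bare-bones illustration of that survival rule (my own sketch in Python; it is
not Bill's or Martin's actual reorganization algorithm): random trial changes
to a control parameter are kept only when they reduce a stand-in for intrinsic
error.

============================================================

import random

def intrinsic_error(gain, r=1.0, d=0.5):
    # steady-state error of a simple proportional loop, used here only as a
    # stand-in for intrinsic error
    p = (gain * r + d) / (1.0 + gain)
    return abs(r - p)

def reorganize(trials=200):
    gain = 0.0                                      # the "organization" being reorganized
    err = intrinsic_error(gain)
    for _ in range(trials):
        trial = max(gain + random.gauss(0.0, 1.0), 0.0)   # random change
        trial_err = intrinsic_error(trial)
        if trial_err < err:                         # the change reduces intrinsic error: it survives
            gain, err = trial, trial_err
        # otherwise the change does not survive and is discarded
    return gain, err

# Run it and the organization that survives is one with high loop gain,
# i.e., one that controls well.

============================================================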
To get back to “pain”, I see “pain” as a perception, but it’s not a
perception of error in a controlled perceptual variable. It’s a
surrogate perception that corresponds to error in some part of the
intrinsic variable system; to reduce pain requires control of a
perception that is not itself pain – the relevant output action could
be the removal of a pin from one’s arm or the acquisition of a new
lover to replace the one that just gave you the push. The reference
value for “pins in the arm” is likely to be zero and that for having a
current lover is likely to be positive, but the perception that there
is a pin in the arm or that there is no current lover is not itself the
perception of pain.
I hope that makes some kind of sense.
Martin


···

http://www.mmtaylor.net/PCT/Mutuality/intrinsic.html

[From Bruce Gregory (2010.02.05.1841 UT)]

[Martin Taylor 2010.03.05.12.27]

[From Bruce Gregory (2010.02.05.1701 UT)]

Sorry if I’m butting in.

BG: I’m not sure that was my problem. I think my problem arose
from understanding the nature of the hierarchy. If I adopt Gavin’s
suggestion that the highest level involves avoiding pain I find the
hierarchy makes more sense to me. In this case, the reference levels do
indeed vary very slowly and reorganization is plausibly the main source
of variation. Clearly there are pathological conditions where the
individual seeks out pain, but in general we do our best to avoid it.

Maybe I’m wrong, but it sounds as though you are missing something else
– the role of intrinsic variables. For me, that’s where the concept of
“pain” is likely to matter. “Pain” would be a percept that occurs
somewhere in the normal (Perceptual control) hierarchy when some
intrinsic variable has a large enough amount of error. Error in some
part of the intrinsic variable system leads to reorganization in the
perceptual control hierarchy – at least that’s the “classic” HPCT
approach. Error in the perceptual control hierarchy is not equivalent
to pain, though it may be accompanied by the perception of pain.

BG: You are correct, I was not thinking explicitly in terms of intrinsic variables, perhaps because they seem to receive little attention in HPCT. I was not thinking in terms of pain being associated with error in the perceptual control hierarchy, but rather pain as a perception that is controlled indirectly by the hierarchy.

Error in some element of the perceptual control hierarchy does not lead
to reorganization, even at the top level of the hierarchy; it leads to
output that influences the perceptual variable in the direction of
reducing error. In discussions of HPCT, it is, however, often assumed
that persistent and especially increasing error in the perceptual
control hierarchy is itself an intrinsic variable with an inborn
reference level of zero.

BG: O.K.

Intrinsic variables are those with genetically determined reference
levels, and whose values are not directly perceptible nor directly
affected by the normal outputs of the perceptual hierarchy (typically,
those outputs are muscular). You perceive hunger, but not low blood
sugar or stomach volume (though you may perceive hunger because the
time of day is near lunchtime, despite having high blood sugar and a
normal sized stomach). You control hunger by eating, and a side effect
of that control is a change in your blood sugar. Most influences on
intrinsic variables are side-effects of perceptual control, since most
(perhaps all) intrinsic variable values cannot be directly perceived.

BG: O.K.

Diagrammatically, the “intrinsic variable control loop” might look like the
diagram at http://www.mmtaylor.net/PCT/Mutuality/intrinsic.html (the screenshot
is not reproduced here); each blob and each arrow probably has many, many
components. (The lines connecting the perceptual control hierarchy to the
environment represent both inputs and outputs.)
A new top level of the perceptual control hierarchy can be grown by
reorganization as easily as a new connection can be made or an old one
broken. If the new top-level control system’s actions serve to reduce
intrinsic error, the new organization is likely to survive. If those
actions tend to increase error among the intrinsic variables, the new
level is unlikely to survive. So far as I understand it, that’s the
mechanism whereby infants increase their number of hierarchic levels of
control as they mature, though, as with the growth of arms and legs in
the fetus, I would expect there to be a genetic component directing how
the levels are likely to develop.
To get back to “pain”, I see “pain” as a perception, but it’s not a
perception of error in a controlled perceptual variable. It’s a
surrogate perception that corresponds to error in some part of the
intrinsic variable system; to reduce pain requires control of a
perception that is not itself pain – the relevant output action could
be the removal of a pin from one’s arm or the acquisition of a new
lover to replace the one that just gave you the push. The reference
value for “pins in the arm” is likely to be zero and that for having a
current lover is likely to be positive, but the perception that there
is a pin in the arm or that there is no current lover is not itself the
perception of pain.

BG: Yes, that is my understanding as well. Putting the question in terms of intrinsic variables helps. It avoids what I think of as the hierarchy problem (what is the purpose of the perceptual hierarchy?). The hierarchy exists to indirectly control error in the system of intrinsic variables. Thanks.

Bruce

···

http://www.mmtaylor.net/PCT/Mutuality/intrinsic.html


[From Bill Powers (2010.03.05.1258 MST)]

Martin Taylor 2010.03.05.12.27 –

[From Bruce Gregory
(2010.02.05.1701 UT)]

MT: Sorry if I’m butting
in.

BG earlier: I’m not sure that
was my problem. I think my problem arose from understanding the nature of
the hierarchy. If I adopt Gavin’s suggestion that the highest level
involves avoiding pain I find the hierarchy makes more sense to me. In
this case, the reference levels do indeed vary very slowly and
reorganization is plausibly the main source of variation. Clearly there
are pathological conditions where the individual seeks out pain, but in
general we do our best to avoid it.

MT: Maybe I’m wrong, but it sounds as though you are missing something
else – the role of intrinsic variables. For me, that’s where the concept
of “pain” is likely to matter.

Thanks, Martin, for a remarkably lucid discussion of reorganization
theory.
I can add only one point to it. I don’t see the reorganizing system, or
function, or whatever we call it, as the highest level in the hierarchy,
for several reasons. The main reason is that it has to be present and
functioning from birth (and maybe before) and is, in theory, responsible
for all added organization that appears during maturation. In fact that’s
more or less an operational definition of what we mean by reorganizing
system: the set of all functions that can have this effect.
Another reason is that reorganization can always work on any level in the
hierarchy, not just the top level.
Another is that the controlled variables relating to reorganization are
not proposed to be among the learned perceptions (i.e., products of
reorganization), nor are they functions of those perceptions. The
variables controlled by the reorganizing system are intrinsic to the
organism, meaning inborn rather than acquired. I’m speaking here of
the theory; I don’t know that this is borne out by observation. Martin
said all this very clearly.
The final reason for not saying that the reorganizing system is the top
level of the hierarchy is that it is not hypothesized to act by varying
the reference signals for lower-level control systems. Instead, it acts
directly on the parameters of neural networks and on the connectivity of
the nervous system at any level. Neuroscientists are discovering that the
brain is far more “plastic” – modifiable – at every level
than was thought to be the case 50 years ago. In B:CP I quoted Jerzy
Konorski, a well-known neurologist, as saying that plasticity extended
even to the lowest, spinal-level, systems. At that time it was thought
that the higher systems could not grow new connections or new neurons;
now those ideas are all being reevaluated. The glial cells may have a
great deal to do with reorganization; it’s possible, though not proven,
that they may be the reorganizing system. They outnumber CNS
neurons by a factor of somewhere between 10 and 50. They could be the
scaffolding on which the hierarchy of control is built.

The only way reorganization could affect reference levels for the top
level of system is to alter the internal structure of input, comparison,
or output functions. Changing the threshold of detection of an input
function or a comparator neuron or an output function could introduce an
offset or bias into the input-output path, so that a nonzero level of
input would be required to bring the effect of the output on the input to
zero (the fundamental definition of a reference level). This would
explain why it’s so hard to change the highest-level goals,
system-concept goals if I have guessed right.
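
Numerically, the point can be seen in a toy comparison (my own sketch, not
anything from B:CP or this post): a comparator with a built-in bias behaves
exactly like one with a nonzero reference signal.

============================================================

def output_with_reference(p, r, gain=2.0):
    return gain * (r - p)           # ordinary comparator with reference signal r

def output_with_bias(p, bias, gain=2.0):
    return gain * (bias - p)        # no reference signal; the bias is wired in

for p in [0.0, 2.5, 5.0, 7.5]:
    assert output_with_reference(p, r=5.0) == output_with_bias(p, bias=5.0)
# The output reaches zero only when the input reaches the bias value, so the
# bias acts as a fixed - and hard to change - reference level of 5.0.

============================================================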

Best,

Bill P.

[From Martin Lewitt (2010.03.05.1431 MST)]

[From Bruce Gregory (2010.02.05.1701 UT)]

[From Bill Powers (2010.03.05.0850 MST)]

You’re thinking of reference levels as things that get set
once and for all. You have missed something very important. That’s not
how the HPCT hierarchy works. Reference levels, with the obvious
exception, are adjusted by variable signals coming from the output
functions of higher-order control systems, and are the means the higher
system uses to control its own perceptions. At the highest level,
reference levels must vary much more slowly because there is no higher
system using them as a means of control. Reorganization must be the
main source of variation at the top level.

BG: I’m not sure that was my problem. I think my problem arose
from understanding the nature of the hierarchy. If I adopt Gavin’s
suggestion that the highest level involves avoiding pain I find the
hierarchy makes more sense to me. In this case, the reference levels do
indeed vary very slowly and reorganization is plausibly the main source
of variation. Clearly there are pathological conditions where the
individual seeks out pain, but in general we do our best to avoid it.

BG: I suggest that Frank finesse the
question and simply assume the agents are controlling for “cooperating
with others.” After all, rational expectation theorists don’t spend a
great deal of time worrying about the reasons that agents are rational.

That’s what is wrong with rational expectation theory, isn’t it? And
assuming that cooperation is a universal inborn controlled variable
doesn’t explain competition, or crime, or ignoring what other people
do. Those things happen, too.

BG: Agreed.

ML: Disagree. Rational expectation theory is more robust and flexible than
is being assumed here, because it can be paired with the subjective
theory of value. Crime is easily explained by agents who value risk
and immediate gratification so much that they don’t consider the long-term
consequences. Or perhaps, as my police-officer cousin-in-law says, “We
only catch the dumb ones” – with less risk, criminal behavior can be
accommodated even more easily. Competition can occur in response to
price signals without any specific intent to compete or awareness of
competitors; the capabilities of the competition might already be
reflected in the price signals. Of course, one can subjectively value
competing as well. There are very few behaviors for which subjective values
can’t be hypothesized as an explanation. That is why I think that if
PCT is to offer better models, it must be by incorporating data from
the actual subjects being modeled. PCT could possibly be hacked into
existing models as a replacement module for the subjective values, and
achieve every benefit of PCT while still working with the rational
expectation module. The subjective theory with rational expectations
is so robust and flexible that it might be unfalsifiable.

Martin L

···

BG: Sorry to spoil Rick’s day.

BP: Are you? That’s decent of you.

BG: Gee, that sounds almost sarcastic.
Rick’s hostility must be catching. I hope I don’t come down with it.

Don’t worry. You’re probably immune to it.

BG: That’s a relief!

Bruce

[Martin Taylor 2010.03.05.14.54]

[From Bill Powers (2010.03.05.0725 MST)]

Martin Taylor 2010.03.04.17.18 –

MT: I’d argue against
using a
platform-dependent language … At the moment, I’m looking into the
freeware OpenLaszlo environment for developing control experiments and
simulations.

BP: I’ve been looking for a platform-independent language as long as
PCT
has been in existence. But it also has to have a few other
characteristics, such as being easy to learn, easy to understand after
a
program has been written, and flexible as a modeling language.
Openlazlo
has only the first characteristic. The sample programs are incredibly
verbose, with the terms that are meaningful for the task at hand buried
in the details of a text-oriented and object-oriented language. Maybe I
just have to get used to it.

I had the same impression. It’s a bit off-putting. But I don’t think
it’s as bad as you make it out to be.

The last, the object-oriented aspect, is the worst part of it from my
point of view. I’m sure this is just a preference of mine, but I have a
problem with thinking of “objects” that are defined not only by
their properties, but by “methods” that define how those
properties are to be used.

“That’s not a bug, but a feature”. It actually is a huge advantage when
things get big and complex. You make sure that the little object does
the right thing when it’s asked, and everything that happens is
encapsulated within the object. So if you want something moved left,
you can write your code to do that without worrying whether what is
being moved is a window or a cursor, provided it understands how to
move itself left.

Look at this example, which uses buttons in
one window to move a second window left and right.

============================================================================

<canvas width="100%" height="200">
  <window x="100" y="60" title="Window 2" name="windowTwo" id="windowTwoId">
    <!-- Moves the second window twenty pixels in specified direction -->
    <method name="moveWindow" args="direction">
      // decide which direction to go
      if (direction == "left") {
        var increment = -20;
      } else if (direction == "right") {
        var increment = 20;
      }
      var originalX = this.x;
      var newX = originalX + increment;
      this.setAttribute('x', newX);
    </method>
    <text>This is the second window.</text>
  </window>
  <window x="20" y="20" width="210" title="Simple Window">
    <simplelayout axis="x" spacing="4"/>
    <button text="Move Left" name="button1" onclick="windowTwoId.moveWindow('left')"/>
    <button text="Move Right" onclick="windowTwoId.moveWindow('right')"/>
  </window>
</canvas>

==============================================================

The code for window two defines a method called “moveWindow”. This
method is evoked by the string

windowTwoId.moveWindow('left')

When that command is given, windowTwo obligingly moves itself left
by 20 units.

In Delphi, the code wouldn’t be a lot different:

with window[2] do

Left := Left - 20

Where “Left” means the location of the left side of any window,
[2] specifies which window in a list of windows. If you wanted to move
window three to the left, you would just write

with window[3] do

Left := Left - 20

To make it possible to move windowThree left in OpenLazlo, you’d have
to
change the code that defines windowThree to add the method that moves
it
left,

No you wouldn’t. You would have a class “MyWindow” that defines the
method “moveWindow”. You would instantiate each of your N windows with
either an LZX tag or a javascript “new MyWindow” that sets the initial
location and gives it a name such as “Window31”. Then if you wanted
window 22 to move left, you would write “Window22.moveWindow('left')”.

Then if you have a new construct, say a bludger, you would create a
class “MyBludger” and write a “moveBludger” method, which would be
identical to the moveWindow method (I don’t know LZX well enough, but
in Ruby you would make a moveMe method that could be used by any
object). Then your code that was moving windows could move bludgers
without a rewrite, if you parameterized it to allow for a
“whichObjectToMove” variable.

I think that the point of object orientation is to separate structure
and operation. It costs complexity when you have a simple small
problem, but gains in comprehensibility and reliability when things get
bigger and more complicated.
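
A sketch of that point in Python (rather than LZX or Ruby; all names are
invented for the example): the operation is written once and works on anything
that knows how to move itself.

============================================================

class Window:
    def __init__(self, x):
        self.x = x
    def move(self, direction, step=20):
        self.x += -step if direction == "left" else step

class Bludger:                      # an entirely different kind of object
    def __init__(self, x):
        self.x = x
    def move(self, direction, step=20):
        self.x += -step if direction == "left" else step

def nudge_left(thing):
    # the caller does not care whether it is moving a window or a bludger
    thing.move("left")

nudge_left(Window(100))
nudge_left(Bludger(7))

============================================================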

Imagine that you want to use an object like a one-foot
ruler to
prop a window open. To manufacture an object-oriented ruler that could
be
used for that purpose, you would have to build into it a method by
which
the ruler can prop a window open, to go along with the method by
which
the ruler can measure the lengths of things laid alongside it. Then, to
prop a window open, you would have to tell the ruler,
“ruler.propwindowopen.” It would then proceed to prop the
window open.

You might do that, or you might have the ruler tell the window
“window22.open(measure)”.

That’s a weird way to think about objects, and all just to get around
the
idea that we are agents who use objects for our own purposes, not
purposes built into the objects – the “affordances” Gibson
thought objects had in them.

But I’ll give OpenLazlo a try, anyway.

It’s not Gibsonian affordances, I think. It’s the affordances we were
talking about a few weeks ago, the possible environmental feedback
paths available to influence a perception. Imagine that you describe an
environment as a large set of objects that mutually influence one
another, and that you can influence (which is rather like the
environment in which we live). You don’t know how the object you
“move(left)” will influence other objects, or how it will influence the
perception you are controlling through all the mutual influences that
environmental objects have on one another (assuming you are not
actually controlling for that object’s position). The objects do their
thing, and you observe the results. Object-oriented programming is like
that.

I’ll be interested in how you get on with OpenLaszlo, and particularly
whether one can get smooth fast interaction between a slider and a
variable the slider influences (essentially the code one would need to
operate a cursor). Example 17.7 on the page at
http://www.openlaszlo.org/lps4.7/docs/developers/layout-and-design.html
seems a bit jerky if you move the slider even moderately quickly. We
don’t want that in a control experiment.
There’s another free open-source possibility I came across yesterday,
called “breve” (http://www.spiderland.org/breve). I know nothing
about it other than that at least one of its interactive demos runs
very smoothly, and another (evolutionary) one has a bug that kills it
shortly after it starts! Here’s a description from the introductory
documentation: --------------------------
Martin

···

http://www.openlaszlo.org/lps4.7/docs/developers/layout-and-design.html

http://www.spiderland.org/breve

What is breve?

breve is a free simulation environment designed for multi-agent
simulation. breve allows users to define the behaviors of autonomous
agents in a continuous 3D world, then observe how they interact. breve
includes support for a rich visualization engine, realistic physical
simulation and an easy-to-use scripting language.

Starting in breve 2.6, breve simulations can be written in breve’s
custom “steve”
language, or with the popular open-source Python language. Both
of these languages, along with their strengths and weaknesses are
described later in this documentation.

breve can be used as a tool to explore any type of simulated world.
breve has been used for a wide variety of simulation applications:
simulated virtual creatures, artificial ecosystems, simulations of
molecular biology, visualization and much more. breve facilitates the
construction of complex agent-based simulations by automatically
handling agent communication, representation in 3D space, graphical
rendering, physical simulation and a number of other features which are
useful to agent-based simulations.


[From Bill Powers (2010.03.05.1355 MST)]

Frank Lenk (2010.03.05.08:00 CST) –

FL: Bill – Try looking at the
Segregation model in the Social Science section of the models library. To
me, this looks like a set of agents with a simple control system.

BP: I agree. The verbal description of the Party program does indicate
control systems. I assume the Segregation model works in about the same
way.


This would be easy to set up as a control-system demo. I have most of it
finished, but am suffering brain fag – tomorrow, maybe.

The controlled variable is the percentage of people of the opposite sex
that is perceived. The reference level is the maximum number considered
tolerable (any number greater than that is an error, below it is zero
error). An error signal causes a random change in the group to which the
person belongs. The random changes go on until some fraction of the
people (up to all of them) have zero error.
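
For what it is worth, here is a guess in Python at the scheme just described;
it is not Bill's demo (which isn't posted yet), only an illustration of the
algorithm as stated, with made-up names and parameters.

============================================================

import random

def party(n_people=70, n_groups=10, tolerance=0.5, max_steps=500):
    # tolerance = reference level: the maximum tolerable fraction of
    # opposite-sex people perceived in one's own group
    people = [{"sex": random.choice("MF"), "group": random.randrange(n_groups)}
              for _ in range(n_people)]
    for _ in range(max_steps):
        errors = 0
        for person in people:
            group = [q for q in people if q["group"] == person["group"]]
            opposite = sum(1 for q in group if q["sex"] != person["sex"])
            p = opposite / len(group)          # perceived fraction of the opposite sex
            if p > tolerance:                  # anything above the reference is error
                person["group"] = random.randrange(n_groups)   # random change of group
                errors += 1
        if errors == 0:                        # everyone at or below reference: done
            break
    return people

============================================================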

FL: How would you change this
simulation to be more consistent with PCT? This will help highlight
where you and I are still thinking differently.

In general, I suspect that Basic and Pascal don’t have functions for
“agent-wants” or “agent-perceives” either. The reference levels and
perceptual functions have to be programmed.

BP: It’s pretty easy: error = r - p. R stands for “agent
wants”, p for “agent perceives.”

FL: On to your criticism of my
philosophical approach, or rather, that I am taking one BEFORE doing the
modeling to prove it.

BP: Sorry about that. I was lecturing.

FL: In general, I find myself
agreeing with you. One of my criticisms of most of the agent-based
economics models I have seen is that they assume their conclusions, that
they are toy models with no grounding in reality.

Let me try again then. I observe the existence of families. I
observe that nearly everyone comes from a family, if not now then at some
point in their past. I don’t observe families operating in total
isolation. I don’t observe individuals operating in total
isolation. I want to build a model that explains how families
survive and reproduce over generations. I hypothesize that this requires
members of the family setting reference levels for and developing
perceptions of family health and safety.

BP: Why would they care about family health and safety? If you can offer
a very plausible possible reason for caring that isn’t just “Because
that’s only natural,” you might have the start of a model.

FL: I hypothesize that one
family cannot do everything necessary to assure their survival. I
hypothesize that some kind of cooperation among families is required, or
at least beneficial.

BP: Then set up a model in which it seems true that cooperation would be
required or beneficial, and then show that the model will develop
cooperation all by itself and not because it knows that cooperation would
be good.

FL: I hypothesize that
such cooperation will be applied to the realm of satisfying material
needs.

BP: There’s a hint. “Material needs” – do you mean learned
needs or intrinsic variables? How about an environment in which
there are many alternative things the family can learn to control as a
means of controlling things they HAVE TO control, such as eating?

FL: I define that realm as “the
economy.” I hypothesize that even within an overall cooperative
scheme, there will still be competition and conflict. I observe and
then specify a certain level of technology and a certain legal and
institutional framework. I hypothesize that divisions of labor,
differences in roles, differences in status will
develop.

BP: I misunderstood “hypothesizing” at first, thinking you
meant things you would put in the model as accomplished facts. But I
think you’re just predicting what a successful model should be able to
do, OK?

FL: I hypothesize that on
certain key measures (distribution of income, for example), the simulated
economy will function in a manner similar to the real
economy.

That’s what you hope will happen when you turn the model on.

FL: At the center of all those
hypotheses is another one – that PCT-based agents are what make all of
those hypotheses plausible.

We both hope the model will do that!

FL: As the agents control their
perceptions by changing behaviors, references, and even their perceptual
functions themselves, they will discover the kinds of cooperation and
competition that make sense for their artificial world.

Yes, that’s it.

FL: My ultimate hypothesis,
then, is that if I populate a sufficiently realistic artificial world
with sufficiently realistic agents, I should get sufficiently realistic
behavior for the model to be useful to understanding the real, not just
artificial world.

Absolutely, that will be the result that can convince people. When you
want it, I’ll give you the reorganization algorithm that I use so your
agents can change their organization in ways that depend on the results
of behavior.

FL: Is this any
better?

Yep.

Best,

Bill P.