Social Reference Drift

[From Rick Marken (2004.06.07.1130)]

I am going out of town for a couple weeks and will have only
intermittent contact with the net. But before I go I'd like to post an
Excel model of Social Reference Drift so that you can examine and
critique it.

The attached spreadsheet, SocialReorg2.xls, actually implements 6
different models of reference change based on social interaction.
Here's how it works.

The first column in the spreadsheet contains the references for the
perception "Hi" for a population of 100 individuals. The reference for
each individual is a random number between 0 and 99. These references
can be viewed as different specifications for a perception of "Hi". The
assumption is that there are 100 different legitimate ways to say "Hi"
(100 different possible references for the perception "Hi") and each
person in the population wants to say "Hi" in one of these ways.

The second column in the spreadsheet shows the reference for each
individual after these individuals have been interacting for a number
of iterations (the number of iterations -- or generations -- being an
input to the program in cell D3; the spreadsheet comes with the number
of iterations set to 200). The individuals interact for the specified
number of iterations when the "Run" button below the graph is pressed.

On each iteration of the program (after "Run" is pressed) there are
100 "greetings" between randomly selected pairs of individuals in the
population. For each of these 100 greetings, one member of the
population is randomly selected to be the "greeter" and another to be
the "responder". The program makes sure that "greeter" and "responder"
are never the same individual.
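The pairing step is simple enough to sketch outside VBA. Here is a Python paraphrase (the function name and 0-based indexing are my choices, not from the spreadsheet):

```python
import random

def pick_pair(n=100):
    """Pick a random greeter/responder pair from n individuals.

    The responder is resampled until it differs from the greeter,
    just as the GoTo-based retry in the spreadsheet macro does.
    """
    greeter = random.randrange(n)
    responder = random.randrange(n)
    while responder == greeter:
        responder = random.randrange(n)
    return greeter, responder
```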

The "Model Type" selected for a run determines what happens to the
reference of the greeter and responder after each "greeting". The type
of model is entered into cell H1. The spreadsheet comes with the model
type set to 1, specifying a "G pure control" model. The "G pure
control" model updates the greeter's reference based on the difference
(error) between the greeter's and the responder's way of saying "Hi".
(It is assumed, as it is in Bill Williams's program, that the reference
corresponds exactly to the output that is produced and perceived. So
error, e, is just the difference between greeter and responder
references). The "pure control" model then updates the greeter's
reference using pure integration: eref(greet) = eref(greet) + e, where
eref(greet) is the greeter's reference on the current iteration.

So the G in the description of Model Type means that only the greeter's
reference is updated after a greeting. G-R in the description of model
type means that the references of both greeter and responder are
updated after a greeting.
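For concreteness, the "G pure control" update amounts to the following (a Python paraphrase of adapt routine 1 in the macro posted below; the clamping to 1..99 mirrors the macro):

```python
def pure_control_update(ref_greet, ref_respond):
    """'G pure control': the greeter's reference absorbs the full error.

    Because output is assumed to equal the reference, e is just the
    difference between the two references, and one pure-integration
    step moves the greeter's reference all the way to the responder's
    (subject to the 1..99 clamp used in the macro).
    """
    e = ref_respond - ref_greet
    new_ref = ref_greet + e
    return min(99, max(1, new_ref))
```

Note that with pure integration and this error definition, a single greeting makes the greeter adopt the responder's reference exactly; the narrowing of the distribution over iterations comes from many such copy-like events.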

The "leaky control" model uses a leaky rather than a pure integration
to update the reference: eref(greet) = eref(greet) + 0.01 * (10 * e -
eref(greet)). The "reorg" model is a "sort of" reorganization model.
The model updates references probabilistically based on the size of the
error; the larger the error, the greater the probability of changing
the reference. It's not real reorganization because when the reference
is updated it is always updated in the "right" direction -- one that
brings the greeter's reference closer to the responder's reference.
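Paraphrased in Python, the two remaining update rules look like this (the names are mine; the constants 0.01 and 10 are taken from the macro):

```python
import random

def leaky_control_update(ref_greet, ref_respond):
    """'Leaky control': eref = eref + 0.01 * (10*e - eref)."""
    e = ref_respond - ref_greet
    new_ref = ref_greet + 0.01 * (10 * e - ref_greet)
    return min(99.0, max(1.0, new_ref))

def reorg_update(ref_greet, ref_respond, rng=random):
    """'Reorg': change with probability proportional to |e|, but always
    in the direction of the responder -- so not true reorganization.
    """
    e = ref_respond - ref_greet
    pr = abs(e) / (ref_respond + ref_greet)
    if rng.random() < pr:
        ref_greet = ref_greet + rng.random() * e
    return min(99.0, max(1.0, ref_greet))
```

One thing the paraphrase makes visible: the leak term `- eref(greet)` pulls the reference toward zero on every greeting, independently of the error, which may be why the leaky models always converge to the low end of the reference range.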

Each time you press "Run" the program performs the user specified
number of iterations, running through 100 randomly paired "greetings"
during each of these iterations and updating the references of each
individual involved in the greeting as specified by the model.

At the end of each iteration, the references of all individuals in the
population "drift" randomly in different directions. The average size
of this drift is determined by the value entered for Disturbance
Amplitude (cell F1). If this cell is set to 0 then there is no random
drift at all in references. The value is currently set to 2 (2% of the
range of reference values) which is pretty small but large enough to
keep some of the models from producing results that are "too perfect".
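In Python, the end-of-iteration drift is (a paraphrase of the "Add noise to population references" loop in the macro; amp = 2 gives each reference a uniform shift in (-1, +1)):

```python
import random

def drift(refs, amp=2.0, rng=random):
    """Disturb every reference by a uniform amount in (-amp/2, +amp/2),
    clamped to the 1..99 range used throughout the model."""
    return [min(99.0, max(1.0, r + (rng.random() - 0.5) * amp))
            for r in refs]
```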

When the run is over the "Reference Distribution" graph shows the
relative frequency of the different references in the population at the
start (Initial) and at the end (Final) of the iterations of
interactions. The distribution of references changes as a result of
these interactions. The changes are different depending on the kind of
model used to update the references. What I find is the following:

The pure control model that updates only the greeter (G pure control)
converges to a narrow distribution of references, the center of this
distribution being different on each run. This seems similar to what
happens in "cultural evolution". Interactions between individuals
result in convergence to some particular way of doing things, but
where the convergence ends up seems random. The population can end up
converging to low, intermediate or high values of references for "Hi".
Where the drift goes is a matter of who tends to interact with whom in
the population and this interaction is currently determined randomly.

The leaky control model that updates only the greeter (G leaky control)
always converges to a narrow distribution at the low end of the
reference values. This doesn't seem to capture cultural evolution.
Same with the reorg model that updates only the greeter (G reorg). This
model (when given enough iterations) seems to converge on intermediate
reference values.

When the references of both greeter and responder are updated, none of
the models seem to give a good representation of cultural evolution.
The pure control model (G-R pure control) doesn't converge; the Final
distribution of references in the population is as flat as the Initial
one. The leaky control model (G-R leaky control) still always converges
to a low reference value and the reorg model (G-R reorg) still
converges to intermediate values.

This, of course, is all very preliminary. I built this model only to
demonstrate an approach to looking at how models of interacting
individuals can be used as the basis of modeling social processes. In
this case, individuals are modeled as control systems that want to
speak (have references for speech perceptions) like others. In one
case, only the greeters change what they want to hear themselves say
(change their reference for "Hi" ) based on how others respond to their
greeting. In the other case, both greeters and responders change what
they want to hear themselves say based on what each member of the dyad
says.

The Visual Basic program that runs the model can be seen by going to
the Visual Basic tool in Excel. But I'm posting it below just in case
people have trouble finding it. It's not pretty; I can already see
ways to tidy it up. But I did this in some haste and I don't have time
to clean it up right now.

I'd like to see what people think of the approach before continuing on
with it. I look forward to hearing constructive suggestions regarding
ways to improve the model.

Best regards

Rick

SocialReorg2.xls (60 KB)


---
Sub RunSheet()

Randomize Timer

Dim iref(100), eref(100), f1(20), f2(20)
  Application.ScreenUpdating = False

' Initialize References

For i = 1 To 100
Cells(i + 1, 1) = Rnd(3) * 100
iref(i) = Cells(i + 1, 1)
Cells(i + 1, 2) = iref(i)
eref(i) = iref(i)
Next

Rem Calculate Initial Frequency Distribution

For i = 1 To 100
m = Int(iref(i) / 5) + 1
f1(m) = f1(m) + 1
Next i

For i = 1 To 20
Cells(i + 1, 10) = f1(i) / 100
Next i

' Main loop

' Get model type

  model = Cells(1, 8)

'Get number of iterations, ni, and disturbance amplitude, amp

ni = Cells(1, 4)
amp = Cells(1, 6)

For i = 1 To ni

For j = 1 To 100

greet = Int(Rnd(3) * 100) + 1
sample:
respond = Int(Rnd(3) * 100) + 1
If greet = respond Then GoTo sample

On model GoTo adapt1, adapt2, adapt3, adapt4, adapt5, adapt6

'Adapt routine 1: greet changes toward respond
' pure integration

adapt1:

e = eref(respond) - eref(greet)
eref(greet) = eref(greet) + e
If eref(greet) > 99 Then eref(greet) = 99
If eref(greet) < 1 Then eref(greet) = 1

GoTo endloop

'Adapt routine 2: greet changes toward respond
'leaky integration
adapt2:

e = eref(respond) - eref(greet)
eref(greet) = eref(greet) + 0.01 * (10 * e - eref(greet))
If eref(greet) > 99 Then eref(greet) = 99
If eref(greet) < 1 Then eref(greet) = 1

GoTo endloop

'Adapt routine 3: greet changes randomly with probability proportional to e

adapt3:

e = eref(respond) - eref(greet)
'compute probability of change
pr = Abs(e) / (eref(respond) + eref(greet))
If Rnd(3) < pr Then
eref(greet) = eref(greet) + Rnd(3) * e
End If
If eref(greet) > 99 Then eref(greet) = 99
If eref(greet) < 1 Then eref(greet) = 1

GoTo endloop

'Adapt routine 4: greet and respond change toward each other
' pure integration

adapt4:

e1 = eref(respond) - eref(greet)
e2 = eref(greet) - eref(respond)
eref(greet) = eref(greet) + e1
eref(respond) = eref(respond) + e2
If eref(greet) > 99 Then eref(greet) = 99
If eref(greet) < 1 Then eref(greet) = 1
If eref(respond) > 99 Then eref(respond) = 99
If eref(respond) < 1 Then eref(respond) = 1

GoTo endloop

'Adapt routine 5: greet and respond change toward each other
'leaky integration
adapt5:

e1 = eref(respond) - eref(greet)
e2 = eref(greet) - eref(respond)
eref(greet) = eref(greet) + 0.01 * (10 * e1 - eref(greet))
eref(respond) = eref(respond) + 0.01 * (10 * e2 - eref(respond))
If eref(greet) > 99 Then eref(greet) = 99
If eref(greet) < 1 Then eref(greet) = 1
If eref(respond) > 99 Then eref(respond) = 99
If eref(respond) < 1 Then eref(respond) = 1

GoTo endloop

'Adapt routine 6: greet and respond change randomly with probability proportional to e

adapt6:

e1 = eref(respond) - eref(greet)
e2 = eref(greet) - eref(respond)
'compute probability of change
pr1 = Abs(e1) / (eref(respond) + eref(greet))
pr2 = Abs(e2) / (eref(respond) + eref(greet))
If Rnd(3) < pr1 Then eref(greet) = eref(greet) + Rnd(3) * e1
If Rnd(3) < pr2 Then eref(respond) = eref(respond) + Rnd(3) * e2
If eref(greet) > 99 Then eref(greet) = 99
If eref(greet) < 1 Then eref(greet) = 1
If eref(respond) > 99 Then eref(respond) = 99
If eref(respond) < 1 Then eref(respond) = 1

endloop:

Next j

'Add noise to population references

For m = 1 To 100
eref(m) = eref(m) + (Rnd(3) - 0.5) * amp
If eref(m) > 99 Then eref(m) = 99
If eref(m) < 1 Then eref(m) = 1
Next m

Next i

For i = 1 To 100
Cells(i + 1, 2) = eref(i)
m = Int(eref(i) / 5) + 1
f2(m) = f2(m) + 1
Next i

For i = 1 To 20
Cells(i + 1, 11) = f2(i) / 100
Next i

Application.ScreenUpdating = True

End Sub

Richard S. Marken
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[From Bill Williams 8 May 2004 2:36 PM CST]

[From Rick Marken (2004.06.07.1130)]

Rick, in an interesting description of his model, mistakenly says,

(It is assumed, as it is in Bill Williams's program, that the reference
corresponds exactly to the output that is produced and perceived.

Rick's misunderstanding may be based on his insistence on misreading my program by focusing on the output p in my program being assigned to two arrays, Ix[x] and Vx[x]. It seems Rick then came to the mistaken conclusion that, since earlier in the loop I said r := Ix[x] and p := Vx[x], r had to equal p. This was the unintended result of my reusing variables in order to save memory. In the course of the loop, when the variables were reused, they came to have a different meaning than Rick assumed when he neglected to take into account all that was happening during the loop.

Had I used more memory by providing a new name for each step in the program, there wouldn't have been an occasion for the mistake Rick made. As yet, neither Rick nor Bill Powers has caught onto a genuine mistake that I made. It doesn't affect how the program runs, but it does unfortunately indicate that I didn't fully understand how my own program worked either. Programming seems to be an activity that leads to superstitions, especially when it is carried on by those who are primarily self-taught, or have learned to program outside a formal course of instruction, as are all those contributing to this discussion. That isn't to say that it is impossible to learn to program outside the context of formal instruction, but rather that having learned outside such a context, the understandings developed are likely to include quirks and partial misunderstandings regarding programming methods. Now, thanks to the recent discussion and my reviewing and experimenting with the code, I understand my own program better. However, I am less convinced than ever that programming simulations offers a foolproof path to enlightenment. Actually, I ought to rephrase that to say, "The problem isn't a matter of being foolproof or not. The problem is that the people causing the problems aren't fools. But they are nevertheless determined to make mistakes."

Bill Powers likes to think that sophistology can be avoided by focusing on doing the "real work" of writing models. But, as this discussion illustrates, the sophistology goes on even when the context is the details of a program's code. So does Bill Powers' typical _ad hominem_ style of argumentation, where his opponents get described as doing stuff that is "shockingly stupid", like not necessarily agreeing with Bill Powers.

As an illustration of this, Bill Powers rewrites my program and then charges me with a mistake that he created in rewriting the program. This is very much like an earlier mistake that Rick made when he looked at a portion of the code, closed the loop in his mind while looking only at the control loop, and concluded that the agent's perception must be equal to the reference level because I had written Vx[x] := p and Ix[x] := p; therefore, when r := Ix[x] and p := Vx[x], r must be equal to p. But not so. Bill Powers now claims that if the "calculate" routine is closed, it will generate the forming-up behavior all on its own. So the procedure isn't needed, and therefore it isn't a PCT-correct program. To reach this conclusion, however, Bill Powers made a rather drastic revision of the program by expunging the control routine, and then considering the behavior of the program without the control procedure. What this creates is an imaginary system that corrects an error by assigning, on each step, a precise adjustment of the agent to the agent's perceived reference value. Now if, and I say IF, I had written the program the way Bill Powers rewrote it, then I would have made the mistake that Bill Powers claims I made. But since I didn't write the program this way, I didn't make the mistake that Bill Powers mistakenly claims I made. Bill Powers is using an "I am fucking the pig, and it is your fault" style of argument. To see if the control loop in my program does anything, the loop gain can be set to zero, and it can be seen whether the control loop actually makes any difference to the way the program functions -- when, that is, the program is the one that I wrote. If the agents in my program, my program mind you, are not controlling -- that is, their gain is set to zero -- then the band doesn't form up.

Geoff Hodgson in his recent (2004) book _The Evolution of Institutional Economics: Agency, Structure and Darwinism in American Institutionalism_
describes a program that seems somewhat like Rick's. At least the reports of the behavior of the models involved appear to be similar. In Hodgson's model, drivers start out driving at random around a circle; this leads to collisions, and to adjustments in the drivers' ideas about which way to drive about the circle.

Hodgson, however, still retains something of the notion that agents can be, as he says, "influenced", "molded" or "shaped" by a culture. Despite his adamant rejection of behaviorism when he confronts it directly, he apparently hasn't yet reflected upon the physiological realities involved, and the conflict between a notion of cultural (semi-soft) determinism and the realities of a causal process.

A comment on Rick's model. Rick's greeting model amounts to agents adjusting to each other's mode of greeting. Drifts in language, from what I can understand, while they include a random element, are ordinarily more than a random process. When the Normans invaded England, there was an incentive for a Saxon to adjust to Norman norms. It would appear that this effect could be added to the greeting program by assigning more weight to the way some agents say "Hi" than to other, less important agents.

The part of the story about why some agents' "Hi" had more weight than others' would still remain outside the program's code, but there are, it seems, clear processes that can be modeled that involve more in the way of interaction between agents than has been the case in most previous modeling efforts. It also seems to me that Tom Bourbon's experiments with coordinated tasks could be revived and introduced. Rather than the computer simulating all of the band members, as in my program, an experimental subject could be given a joystick and a display with distances to the experimental subject's guide agents. Then the computer could handle 1023 agents and the experimental subject could "really" control the position of one agent. Then it would be implausible that the computer knew the value of the experimental subject's reference value. All the computer would know would be how the experimental subject moved the joystick. In this case the experimental subject would be carrying out both the control routine and also the compute-reference-value routine.

Bill Williams

[From Bill Powers (2004.06.08.1453 MDT)]

Rick Marken (2004.06.07.1130) --

Best wishes for your presentation at Oxford!

I am going out of town for a couple weeks and will have only
intermittent contact with the net. But before I go I'd like to post an
Excel model of Social Reference Drift so that you can examine and
critique it.

Thanks for posting the Basic source code. My OpenOffice spreadsheet does
one run on startup and then doesn't seem to do any more when I click on
Run, even if I change the type of model. The calculation option is set to
Step = 100; Min Change = 0.001. Don't know what else there is to adjust or
click on.

The model seems to be mainly a test of reorganization, with reorganization
being only a probability of change in the right direction. There are no
control systems, so no parameters to vary. I don't see offhand how to set
up a nontrivial version involving control -- we could define some
"intrinsic" consequences of not being understood or not understanding, but
it would all come back to an inevitable convergence if any at all. Whatever
you say is the right direction becomes the direction of change -- the same
model would produce results if you said the right direction is anything
less than the right number, anything greater, anything that brings the
greeter's number to 1/3 of the responder's number, and so forth. And the
model would do that. So linguistic convergence would happen only if you
said that what reduces error is linguistic convergence, defined however you
want to define convergence. Somehow this doesn't sound like an explanation
of convergence.

Maybe a more informative model (but harder to write) would involve people
trying to control some variable together by means that require
communication. They could use words made of three letters: two consonants
with a vowel between them. The words would indicate requested actions, of
which there would be an assortment having different effects on a set of
controlled variables. So "pit bab" could indicate lifting (pit) the right
end ( bab) of the object, and "pot bib" could mean push down (pot) on the
left end (bib). If the right thing isn't done, the controlled variable
would fail to match the reference levels of either or both parties, who
would then randomly revise their dictionaries in the E. coli manner
(frequently or infrequently depending on the amount of error in the
controlled variable).
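One possible reading of this proposal, sketched in Python purely to make the mechanism concrete -- the word inventory, the error scale, and all the names here are my own assumptions, not anything specified above:

```python
import random

CONSONANTS = "pbt"  # a toy inventory, just for illustration
VOWELS = "aio"

def random_word(rng=random):
    """A consonant-vowel-consonant word like 'pit', 'bab', or 'pot'."""
    return (rng.choice(CONSONANTS) + rng.choice(VOWELS)
            + rng.choice(CONSONANTS))

def maybe_revise(dictionary, error, max_error=100.0, rng=random):
    """E. coli-style revision: the larger the error in the jointly
    controlled variable, the more likely a random 'tumble' that
    rewrites one entry of this speaker's action-to-word dictionary.
    """
    if rng.random() < min(1.0, abs(error) / max_error):
        action = rng.choice(list(dictionary))
        dictionary[action] = random_word(rng)
    return dictionary
```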

I'm sure a linguist interested in this sort of modeling could come up with
a more interesting situation to model. The setup has to have enough
complexity to make the result non-obvious, if we're going to get out of it
anything but a restatement of what we put into it.

Best,

Bill P.

[From Bill Powers (2004.06.08.1620 MDT)]

Bill Williams 8 May 2004 2:36 PM CST --

As yet, neither Rick nor Bill Powers has caught onto a genuine mistake
that I made.

I haven't looked for it yet, being more concerned with understanding just
what is and is not controlled in this program.

Please note: I do not have a low opinion of your programming abilities even
when I think you have made mistakes. I make mistakes, too, as do all
programmers. We help each other out by finding them. Any doubts I may have
had were settled by your "interlocking directorates" program, written in a
language that is still beyond me. I do not confuse inexperience with
inability. You are inexperienced, but highly capable.

Bill Powers likes to think that sophistology can be avoided by focusing on
doing the "real work" of writing models.

Sophistology can't be avoided if you think that the perceptual input
function is an unnecessary philosophical addition to the control system
model. You are flat wrong about that. What you say I think, above, you just
made up out of thin air. But thanks for telling me what you worry about.

So, does Bill Powers typical _ad homien_ style of argumentation, where his
opponates get described as doing stuff that is "shockingly stupid" like
not neccesearily agreeing with Bill Powers.

Even discounting your dyslexia, you are wrong about my attitude toward
disagreements. I think you will have to go back a long way to find a
passage of mine that anyone could interpret, even through misreading it, as
rebutting a disagreement by calling it "shockingly stupid." Your way of
randomly selecting quotations and then altering them to suit your purposes
does not, I hope, convince anyone that I am anything like what you describe.

As an illustration of this, Bill Powers rewrites my program and then
charges me with a mistake that he created in rewriting the program.

Is that how you understood what I did? Then let me explain it again. You
claimed that the parts of your program that compute new reference levels on
each iteration are part of a control process that achieves uniform spacing
of the actors in your program. By eliminating the control code from your
program, I showed that you were really just computing the positions of the
actors open-loop -- whether this was accomplished by inserting control
systems or simply by setting the positions to the computed values was
irrelevant. The actual control processes made the actor's positions equal
the computed reference positions, to be sure, but the controlled variables
of those systems were positions, not uniformity of spacing. There is no
control system for uniformity of spacing -- unless you are proposing that
an open-loop process can legitimately be called control.

I also explained how to determine whether a variable is controlled or not.
You apply disturbances directly to it, and see if there is some part of the
system that responds by creating an equal and opposite effect on the same
variable, leaving no net effect (or very little). Your position control
would pass that test. Your program's "control" of equal spacing would not.

A disturbance, by the way, is not just a change that is inserted on a
single iteration and then disappears, as in your earlier versions of this
program. To see how a control system counteracts disturbances you have to
apply the disturbance long enough (for a sufficient number of iterations)
for the action of the controller to oppose it and cancel its effects while
it is still being applied.
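The test just described can be sketched generically in Python (a sketch only: `step` and `get_value` stand for whatever advances the model and reads the candidate variable, and the 10% threshold is an arbitrary choice of mine):

```python
def disturbance_test(step, get_value, disturbance=10.0, settle=200):
    """The Test for the Controlled Variable, per the paragraph above:
    apply a *sustained* disturbance and see whether the system's own
    action cancels most of its effect on the candidate variable.

    step(d) advances the system one iteration under disturbance d;
    get_value() reads the candidate controlled variable.
    """
    for _ in range(settle):           # let the system settle undisturbed
        step(0.0)
    baseline = get_value()
    for _ in range(settle):           # hold the disturbance on
        step(disturbance)
    deflection = get_value() - baseline
    # Controlled if the net effect is a small fraction of the disturbance.
    return abs(deflection) < 0.1 * abs(disturbance)
```

An open-loop system fails this test: its variable simply follows the disturbance, while a working position controller returns the variable nearly to its baseline.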

This is very much like an earlier mistake that Rick made when he looked
at a portion of the code, closed the loop in his mind while looking only
at the control loop, and concluded that the agent's perception must be
equal to the reference level because I had written Vx[x] := p and Ix[x] :=
p; therefore, when r := Ix[x] and p := Vx[x], r must be equal to p.
But not so.

True, because after resetting the reference level to equal p (in procedure
"loop"), you immediately recompute it (in procedure "calculate"). The
reference positions are computed by using ex[x] and ey[x] as temporary
variables for averaging neighboring values of Ix and Iy. Then you copy the
final values of Ex into Ix, substituting that value for the value of p that
had been put into Ix by the "Loop" procedure. So there was no need to store
p in Ix, as far as I can see. But it didn't waste much time, so I didn't
mention it. Is that the mistake you're talking about?

Bill Powers now claims that if the "calculate" routine is closed that it
will generate the forming up behavior all on its own. So, the procedure
isn't needed, and therefore it isn't a PCT correct program.

That's not what I said. Of course you need to make the perceptual signal
match the reference signal, because it's the perceptual signal you take as
indicating the position of the agent. I simply made the perceptual signal
match the reference signal directly, instead of doing it through a control
system. Since the same pattern appeared, this shows that it is not the
control process that creates the pattern, but the computations of reference
signals, which are done by the "compute" routine, not the "loop" routine.
The "compute" routine has no control systems in it: only an open-loop
averaging process.

To see if the control loop in my program does anything the loop gain can
be set to zero in my program and it can be seen if the control loop
actually makes any difference to the way the program functions-- when,
that is, the program is the one that I wrote. If the agents in my
program, my program mind you, are not controlling-- that is their gain is
set to zero, then the band doesn't form up.

Of course not, because the actors' positions are not made equal to the
reference positions when the control systems cease to work. But that does
not prove that the patterns come out of the control process. They do not:
they exist in the reference signals. The control systems in your program
just make the positions conform to the reference signals (except for
instabilities).

Come on, Bill. You're just being stubborn.

Best,

Bill P.