Love and Hate

[Martin Taylor 2003.12.07.1945 EST]

[From Bruce Gregory (2003.12.07.1818)]

Rick Marken (2003.12.07.1440)

I replied by saying "The relative gain of same-level systems is not
really part of the process of hierarchical control" because I don't
consider relative gain to be an essential consideration when building
a working HPCT model.

O.K. Then how does my traffic example work if one control system does
not overwhelm the other?

I thought I explained that. One DOES overwhelm the other, but it has
nothing to do with differential gain. (I did misinterpret you to have
been talking about changes in gain, but that doesn't change anything
about the fact that the relative gains are pretty near to irrelevant).

The "getting to work on time" control system experiences little
increase in its (currently near zero) error if you brake, but the
"distance to car in front" control system experiences a rapid and
large increase in error if you don't. If two systems have equal gain,
the one with greater error will have greater output. The foot goes on
the brake.
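
Martin's arithmetic can be sketched in a few lines. This is a hypothetical illustration with invented numbers and gains, not a model anyone in the thread has built: with equal gains, output is proportional to error, so the system with the larger error produces the larger output.

```python
# Two proportional control systems with EQUAL gain: output = gain * error.
# All numbers below are invented for illustration.

GAIN = 1.0

def output(gain, reference, perception):
    # A proportional controller's output is gain times its error signal.
    return gain * (reference - perception)

# "Getting to work on time": currently on schedule, so error is near zero.
on_time_output = output(GAIN, reference=0.0, perception=0.05)

# "Distance to car in front": a car cuts in, producing a large error.
distance_output = output(GAIN, reference=30.0, perception=5.0)

# With equal gains, the system with the greater error has the greater
# output, and the foot goes on the brake.
print(abs(distance_output) > abs(on_time_output))
```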

Martin

[From Rick Marken (2003.12.07.1720)]

Bruce Gregory (2003.12.07.1818)--

Rick Marken (2003.12.07.1440)

I replied by saying "The relative gain of same-level systems is not
really part of the process of hierarchical control" because I don't
consider relative gain to be an essential consideration when building
a working HPCT model.

O.K. Then how does my traffic example work if one control system does
not overwhelm the other?

Your traffic example is an observation: people will brake for the car
in front when on the way to work. Now you're asking a question about a
model. I can't tell what model you are talking about unless you show
it to me as a functional diagram, a program or a set of equations.

What is essential to the process of hierarchical control is what I
mentioned earlier: that different _types_ of perceptual variable be
controlled by each level of the hierarchy and that control systems at
the same level control independent degrees of freedom of perceptual
variables from the next lower level.

Can I then assume that the systems controlling whether my foot is on
the accelerator or on the brake involve different degrees of freedom?
Both systems _seem_ to be controlling the location of my foot. What
subtlety have I failed to grasp?

Actually, it's I who has failed to grasp some subtlety because I no
longer have any idea what we're talking about. As I recall, you had
asked whether my spreadsheet hierarchy was a "realistic" model of your
traffic example. Once I found out what your traffic example was
(stopping for stopped cars on the way to work), I said that the
spreadsheet was not a realistic model of that situation (I assumed that
by "realistic" you meant a model that could behave as drivers do when
confronted with stopped cars while driving to work) because it was not
designed to account for that particular behavior. I said I thought a
realistic model of the traffic situation may be the one used to model
the agents in the Crowd model. Now I don't know where we are. Do you
think the agent model in the Crowd program is not a realistic model of
the driver in your traffic situation? If not, could you refresh my
memory of that model by providing a functional diagram of it? Do you
think that model is not a real hierarchical control model for some
reason? What subtlety have I failed to grasp?

Best

Rick

···

---
Richard S. Marken
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[Bruce Nevin (2003.12.07 21:31 EST)]

(Bill Williams 7 December 2003 4:47 P.M)

contrary to the assertions of the cultural relativists who
would deny the existence of any values that have anything other than
an arbitrary cultural source, there are of late many people who are
seeking to locate a non-arbitrary foundation for values. I think many
of these people would say that, language included, all human behavior
can be understood in terms of a process of value creation, and that
these values are not merely conventional. The question of how such
values could arise, they might say, is identical to the question of
how life could arise.

‘Cultural relativism’ seems to claim that cultural values are unboundedly
arbitrary. In the control-theory model (beginning with Martin’s ideas
about convergence), cultural conventions arise out of the interactions,
over time, of autonomous control systems. Convention constrains the
manner of controlling perceptions (reducing degrees of freedom)
and establishes in the participant sequence-level references for the most
conventionalized interactions. This makes it easier to identify the
intentions of other participants and facilitates cooperation. It is an
important basis for the perceptions to which we refer with words like
trust and, yes, to some extent, fairness. People who don’t
follow (or don’t know) the conventions of the community are not
immediately trusted, nor are people who have behaved conventionally but
have turned out not to have the intentions that go with them. Cultural
standards seem arbitrary from outside the community but are necessary
within it. Their arbitrariness is not without limit; they facilitate
social ends, perceptions which are difficult or impossible to control
without cooperation of others, else the participants in the community
eventually cease re-establishing them in the course of their
interactions.

This is indeed related to the question of how life arises. Any
arrangement that reduces disturbances and makes the environment more
predictable for the organism supports its survival and the survival of
its genetic descendants. This is why life is anti-entropic, building up
more complex forms rather than breaking down, as other physical systems
do, toward the equilibrium of ‘heat death’.

So if this is what they mean by ‘value creation’ maybe they’re on to
something and maybe PCT has something they’re willing to be interested
in.

    /Bruce Nevin

···


[From Bill Powers (2003.12.07.1847 MST)]

Bruce Gregory (2003.12.07.1818)--

O.K. Then how does my traffic
example work if one control system does

not overwhelm the other?

I can tell you how the control systems interact in the Crowd demo, though
I don’t know what you mean by “overwhelm.” Perhaps you can tell
me if the following involves one system doing something called
“overwhelming” to another, and if the Crowd setup would seem
appropriate for application to the driving problem, given suitable
changes in parameters.

In the Crowd program, the destination-seeking system computes proximity
to the destination according to an inverse-square-of-distance curve. As a
result, perceived proximity is nearly zero when the agent is far from the
destination, and the proximity error is close to 255 (since the reference
signal is set to that number, the maximum possible). Most of the way to
the destination, the error remains close to the maximum value, and the
speed, which is proportional to the error, is roughly constant. Only when
the destination-seeking agent is within perhaps 50 to 100 pixels of the
destination circle does the proximity error, and therefore the
speed, begin to decrease significantly. The gain of this control system,
which sets the ratio of speed to error, is low enough so that with
maximum error, progress toward the goal is at a moderate speed. I don’t
know how realistically you would say that this represents the way you
drive to a distant destination, but it does account for the limits on
speed when the error is very large, and the final decrease in speed when
the goal position looms close ahead. The proximity perception, by the
way, was based on the apparent visual area of objects as one approaches
them, which follows an inverse-square-of-distance law.

This agent also avoids collisions with other objects. The proximity curve
used for collision avoidance is steeper, so the perception of proximity
to an obstacle stays low until the obstacle is near, and then rises
sharply. There is a reference level for maximum permissible proximity to
an obstacle, set to something like 80 to 100 (with maximum proximity
again being 255 when at the point of collision). The collision avoidance
control system uses one-way control; that is, only proximities greater
than the reference level lead to avoidance actions. If the reference
level is set low, avoidance begins while the agent is still far from the
obstacle. A high setting of the reference signal would delay avoidance
until the object was close ahead. Avoidance consists of turning left or
right, the angular speed of the turn being proportional to the
error and the sign of turning being set by whether total left proximity
(from all objects) is less than or greater than total right proximity.
Also, the proximity error reduces the speed by an amount proportional to
error. That’s why the agent slows down when squeezing between two
obstacles. If the gain of that part of the loop is set too high, the
agent will stop instead of going between two obstacles close
together.

When a collision becomes imminent, the destination-seeking system does
not change its output signals at all. The speed signal stays the same and
the directional signal remains proportional to the angle between the
direction of travel and the direction to the destination (well, not quite
proportional – there’s a cardioid pattern that reduces sensitivity to
proximities to the rear of the agent).

The speed signals from the collision-avoidance and destination-seeking
systems add, so the total speed is proportional to their sum. The
direction signal is the integral of the summed outputs of the two control
systems, so while the direction to the destination remains about the
same, the direction error changes by a large amount as the agent turns to
avoid the collision. However, the contribution to the total direction
error signal from the collision avoidance system is somewhat larger, and
while collisions are near, is the winning contribution to the total
direction error signal and determines how fast the agent will turn. After
the collision has been avoided, the collision proximity error drops below
the reference signal and the destination seeking system becomes the only
determinant of the direction error, which is then corrected. The course
toward the destination continues. You could say that the output of the
collision-avoidance system overwhelms, or overcomes, the output of the
destination-seeking system when the two are not aligned.

Note that there is no direct action by the collision-avoidance system on
the destination seeking system. These are two independent higher-order
systems acting through a common set of lower-order systems. The
destination seeking system continues to operate normally, trying to
produce changes in the outputs that will correct its own errors. The
collision-avoidance system does not make the destination-seeking system
stop working; it simply produces somewhat more output than the
destination-seeking system can produce. This is largely a matter of how
the relative loop gains are adjusted, and where the reference signals are
set.
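
The mechanics described above can be condensed into a rough sketch. The curve scales, gains, and reference values below are invented stand-ins, not the actual Crowd parameters, but the structure follows the description: inverse-square proximity perceptions, a destination-seeking system whose error saturates far from the goal, and a one-way collision-avoidance system whose output adds to (and can overwhelm) the destination-seeking output.

```python
# Sketch of two Crowd-style higher-order systems sharing lower-order
# speed/direction outputs. Constants are illustrative, not from the program.

MAX_PROX = 255.0  # perceived proximity at the point of contact

def proximity(distance, scale):
    # Perceived proximity follows an inverse square of distance,
    # saturating at MAX_PROX.
    return min(MAX_PROX, scale / (distance * distance))

def crowd_step(dest_dist, obstacle_dist, dest_gain=0.2,
               turn_gain=1.0, slow_gain=0.25, avoid_ref=90.0):
    # Destination seeking: reference is maximum proximity, so the error
    # (and hence the speed) stays nearly constant until the goal is close.
    dest_error = MAX_PROX - proximity(dest_dist, scale=2.0e5)
    speed = dest_gain * dest_error

    # Collision avoidance: one-way control -- only proximity above the
    # reference produces output. A steeper curve keeps the perception
    # low until the obstacle is near.
    avoid_error = max(0.0, proximity(obstacle_dist, scale=8.0e5) - avoid_ref)

    # Avoidance output turns the agent and subtracts from the speed,
    # which is why the agent slows when squeezing between obstacles.
    turn_rate = turn_gain * avoid_error
    speed = max(0.0, speed - slow_gain * avoid_error)
    return speed, turn_rate

far = crowd_step(dest_dist=1000.0, obstacle_dist=1000.0)   # cruising
near_goal = crowd_step(dest_dist=20.0, obstacle_dist=1000.0)
near_obstacle = crowd_step(dest_dist=1000.0, obstacle_dist=25.0)
print(far, near_goal, near_obstacle)
```

Note that the destination-seeking computation is unchanged when an obstacle looms; the avoidance system simply contributes more output to the shared speed and direction signals.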

I think a hierarchical control system similar to this could be applied to
the braking example, although the details of the actuators would change.
We could combine the braking system with the accelerator system, so that
for small proximity errors in the collision-avoidance system the
accelerator pressure would be reduced, and for larger errors the foot
would transfer to the brake and a braking force proportional to the error
would be produced. It would probably be necessary to include rate of
change of error, so that sudden changes in error would produce more
braking than gradual ones, for the same amount of error. The destination
seeking system would operate by applying positive foot pressure to the
accelerator when the foot was available, with perhaps still another
system that limits speed contributing to the same speed reference signal.
It would go on trying to apply foot pressure to the accelerator even when
the braking system was acting – without effect.
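
That braking scheme might be sketched as an output function like the following. The threshold, gains, and the simple rate-of-change term are invented for illustration, not taken from any worked-out model:

```python
# Small collision-proximity errors ease off the accelerator; larger errors
# move the foot to the brake, with braking force proportional to the error
# plus its rate of change. All constants are invented.

BRAKE_THRESHOLD = 50.0  # error above which the foot transfers to the brake

def pedal_outputs(prox_error, d_error_dt, accel_demand,
                  ease_gain=0.01, brake_gain=0.5, rate_gain=2.0):
    # accel_demand is what the destination-seeking system keeps asking for;
    # while braking, that demand simply has no effect.
    if prox_error <= BRAKE_THRESHOLD:
        # Small error: reduce accelerator pressure in proportion to error.
        accel = max(0.0, accel_demand * (1.0 - ease_gain * prox_error))
        brake = 0.0
    else:
        # Large error: brake. The rate term makes a sudden increase in
        # error produce more braking than a gradual one of the same size.
        accel = 0.0
        brake = brake_gain * prox_error + rate_gain * max(0.0, d_error_dt)
    return accel, brake

cruising = pedal_outputs(prox_error=10.0, d_error_dt=0.0, accel_demand=1.0)
sudden = pedal_outputs(prox_error=80.0, d_error_dt=30.0, accel_demand=1.0)
gradual = pedal_outputs(prox_error=80.0, d_error_dt=2.0, accel_demand=1.0)
print(cruising, sudden, gradual)
```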

Once the principles of hierarchical control are understood, the design
possibilities are endless. You can see that a large amount of detailed
design went into the Crowd program. There is no single design, of course,
that would apply to every possible controlled variable or every
environmental situation. The design details have to be worked out from
observing (or imagining) real behavior in real environments. The main
insight that made the Crowd program feasible was the realization that one
controlled variable in such situations probably has to do with perceived
proximity rather than linear distances. A natural measure of proximity
would be visual area, or brightness, both of which are inversely
proportional to the square of the distance to an object (sound intensity,
too, and odor concentrations). This took care of a number of problems,
such as the question of why speed remains constant until the destination
is nearby. If distance were being controlled, the position error would
simply increase linearly with distance and we would find the approach
being fastest for the largest distances, half as fast for half the
distance, and so on – which doesn’t happen. Of course one can postulate
nonlinear perceptual or output functions, but it’s much nicer to find a
natural explanation for the nonlinearity. The Crowd program you are
familiar with was about revision 10 from the first one, by the way. The
first one tried to use control of distance, and was a flop. Another
version actually generated a map of the obstacles in a 360-degree circle,
as seen by the agent.
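
The difference between controlling linear distance and controlling inverse-square proximity is easy to check numerically; the gains and scale factor here are arbitrary:

```python
# Controlling linear distance makes approach speed proportional to the
# remaining distance: fastest when farthest, halving with the distance.
# Controlling inverse-square proximity keeps speed nearly constant until
# the goal is close. Gains and scale are arbitrary.

MAX_PROX = 255.0

def speed_distance_control(d, gain=0.05):
    return gain * d  # error is the remaining distance itself

def speed_proximity_control(d, gain=0.2, scale=2.0e5):
    prox = min(MAX_PROX, scale / (d * d))  # inverse-square perception
    return gain * (MAX_PROX - prox)        # reference = maximum proximity

for d in (1000.0, 500.0, 60.0):
    print(d, speed_distance_control(d), round(speed_proximity_control(d), 1))
```

Under distance control the speed halves from 1000 to 500 units away; under proximity control it is nearly identical at both distances and only drops near the goal.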

As you can appreciate, the general hierarchical control model only
establishes the principles of hierarchical control. The details for any
specific behavior have to be filled in from knowledge of the physical
situation and estimates of what variables are likely to be controlled.
One must always be looking for physical principles that can be used to
keep the model from being just a collection of arbitrary formulas, and
one must stick to the basic concepts of hierarchical control (not too
slavishly, of course).

Some people, I know, are confused when they describe some situation and I
say, “I don’t know – I haven’t modeled that one.” But, they
say, doesn’t your model apply to all behavior? Yes, of course, but to
actually produce a working model takes more than a diagram on the back of
an envelope. You have to decide on what perceptions are involved, what
physical laws, what means of acting, and so on, and these are different
for every new kind of case. This doesn’t mean that the model doesn’t
apply – but it doesn’t apply itself. Someone has to do the work
of applying it.

Best,

Bill P.

Bruce,

Based on your reply, I think to some extent I managed to communicate.
After posting, I wondered whether it would be understood in the sense I
intended. I think it was.

I would say that many of the people I described ought to find in
control theory a body of work that would support and assist them
in furthering the development of ideas, practice and even perhaps
an ethic. For the most part the analysis of human experience is
still split into two supposedly quite different domains. One is
the objective world of scientific analysis, and the other the
domain of axiology or value theory. A control theory analysis, however,
includes both an analysis of behavior as well as an account of
that behavior in terms of the values and structures (the reference
level and the control loops) that supply meaning to the actions.

I think the realization of this integration is hindered by a current,
not entirely unjustified, fear of science. There is, of course, the
counterpart fear on the part of scientists who are afraid of the
idea that behavior can be purposeful. And, I'm convinced that
it isn't likely that someone is going to genuinely understand
control theory without an extensive experience working through
problems and observing control theory experiments or demos. I once
thought that this experience had to be the sort of thing you would
do with electronic circuits and an oscilloscope, or programming control
processes on a computer. But, you seem to understand the theory quite
well based upon working through these issues in the context of
linguistics.

best

Bill Williams

···


[From Bruce Gregory (2003.12.07.2300)]

Rick Marken (2003.12.07.1720)

Bruce Gregory (2003.12.07.1818)--

Rick Marken (2003.12.07.1440)

I replied by saying "The relative gain of same-level systems is not
really part of the process of hierarchical control" because I don't
consider relative gain to be an essential consideration when building
a working HPCT model.

O.K. Then how does my traffic example work if one control system does
not overwhelm the other?

Your traffic example is an observation: people will brake for the car
in front when on the way to work. Now you're asking a question about a
model. I can't tell what model you are talking about unless you show
it to me as a functional diagram, a program or a set of equations.

You win.

Bruce Gregory

"Everything that needs to be said has already been said. But since no
one was listening, everything must be said again."

                                                                                Andre Gide

[From Bruce Gregory (2003.12.07.2255)]

[Martin Taylor 2003.12.07.1945 EST]

The "getting to work on time" control system experiences little
increase in its (currently near zero) error if you brake, but the
"distance to car in front" control system experiences a rapid and
large increase in error if you don't. If two systems have equal gain,
the one with greater error will have greater output. The foot goes on
the brake.

I understand, but I don't agree. Why do you say that the 'getting to
work on time' control system experiences little or no error when I act
in a way that ensures that I will not get to work on time? Suppose no
car cuts in front of me. What happens then if my car brakes due to a
malfunction in the system? Does the 'getting to work on time' control
system experience little error?

"Everything that needs to be said has already been said. But since no
one was listening, everything must be said again."

                                                                                Andre Gide

[From Bruce Gregory (2003.12.08.0932)]

Bill Powers (2003.12.07.1847 MST)

Thanks, Bill. I truly appreciate the time and effort you put into
providing such a detailed and informative response.

Bruce Gregory

"Everything that needs to be said has already been said. But since no
one was listening, everything must be said again."

                                                                                Andre Gide

[From Bruce Gregory (2003.12.08.1044)]

Bill Williams 7 December 2003 6:14 P.M.

The first and I think most familiar example is one in which there is
excessive gain, or excessive loop delay, resulting in oscillations. PIOs
(pilot-induced oscillations) are, I understand, quite common in air-to-air
refueling.

I'll bet!

But, on one occasion I observed a student at night become confused as
to which direction was up. There was a full moon and broken clouds.
By chance the full light of the moon fell on a lake which was highly
reflective on a still night. Seeing the reflection of the moon below
him on the lake, the student attempted to rotate the aircraft so that
it would be, in his perception, upright. As he was doing this the
student checked the instruments and, with his eyes off the outside
illusion, corrected the plane's attitude with reference to the
instruments. Once the plane stabilized in a genuinely upright position,
the student went back to looking outside, repeating the cycle. This
sequence repeated several times. Nothing was wrong, as far as I could
tell, with the student's loop gain or phase delay.

I never had problems flying on instruments because I believed them
completely. Not always a good idea, I add. But the transition from
instruments to visual references can be mighty disorienting. It's not
surprising that most private pilot accidents occur at night when the
weather is less than great. I recall one night flight in bad weather I
took many years ago as a naive passenger with a co-worker who was getting
his instrument ticket. I _never_ would have done that had I known how
many things could go wrong. Occasionally, ignorance _is_ bliss. I agree
that your student seemed to be illustrating pure PIOs with no evidence
of a loop-gain or phase-delay problem.
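
The gain-and-delay variety of PIO, as distinct from the student's illusion-driven cycle, can be shown with a toy loop. This is an invented scalar simulation, not aircraft dynamics: acting on a delayed perception with too much gain makes each correction overshoot, and the oscillation grows.

```python
# A proportional controller corrects a scalar "attitude error". With no
# perceptual delay and modest gain the error dies away; acting on a
# delayed perception with high gain produces a growing oscillation, the
# signature of a pilot-induced oscillation. Parameters are invented.

def simulate(gain, delay, steps=20):
    position = 1.0          # initial error
    history = [position]    # record of past positions (perceptions)
    for _ in range(steps):
        # The controller acts on a perception `delay` steps old.
        perceived = history[max(0, len(history) - 1 - delay)]
        position -= gain * perceived  # output opposes the perceived error
        history.append(position)
    return history

stable = simulate(gain=0.5, delay=0)    # error decays smoothly
unstable = simulate(gain=1.8, delay=1)  # overshoot grows each cycle
print(abs(stable[-1]), max(abs(x) for x in unstable))
```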

Bruce Gregory

"Everything that needs to be said has already been said. But since no
one was listening, everything must be said again."

                                                                                Andre Gide

[From Rick Marken (2003.12.08.0850)]

Bruce Gregory (2003.12.08.0932)--

> Bill Powers (2003.12.07.1847 MST)

Thanks, Bill. I truly appreciate the time and effort you put into
providing such a detailed and informative response.

Thanks to Bill from me, too.

I hope you now have a little better understanding of how to use the principles of
hierarchical control, like those illustrated in my spreadsheet, to build working models of
actual behavior, like avoiding obstacles while driving to work. It can't be done using
recipes or verbalisms. It takes creativity informed by general scientific knowledge (like
the inverse square relationship between projected size and distance) and constrained by an
understanding of the principles of hierarchical control.

I hope that your appreciation of Bill's comments encourages you to reconsider some of the
comments you made about HPCT in the posts that started your little game [e.g. Bruce Gregory
(2003.12.07.0905)], such as:

This [HPCT] process involves, as far as I can see, two mechanisms.
Reference levels established from above and same-level systems with
higher gain dominating systems with lower gain. Beyond that there
seems little more to say at this time.

Actually, there was quite a bit more to say -- especially correct stuff -- wasn't there?

Best regards

Rick

···

--
Richard S. Marken, Ph.D.
Senior Behavioral Scientist
The RAND Corporation
PO Box 2138
1700 Main Street
Santa Monica, CA 90407-2138
Tel: 310-393-0411 x7971
Fax: 310-451-7018
E-mail: rmarken@rand.org

[From Bruce Gregory (2003.12.08.1350)]

Rick Marken (2003.12.08.0850)

Actually, there was quite a bit more to say -- especially correct
stuff -- wasn't there?

Once again you are right! I'm continually amazed that someone who knows
as much as you do is so understanding and so generous with
encouragement.

I'll let you figure out whether the above is sincere or ironic. Most
people, I suspect, will have little difficulty figuring this out.

As someone is reputed to have said, "When you know Rick as we know him,
you will esteem him as we esteem him."

Bruce Gregory

"Everything that needs to be said has already been said. But since no
one was listening, everything must be said again."

                                                                                Andre Gide

Bruce,

you say,

I _never_ would have done that [ridden with a low-time pilot] had I known
how many things could go wrong.

I've made the same mistake.

And, you say,

I never had problems flying on instruments because I believed them
completely. Not always a good idea, I add.

Definitely not. I had a vacuum pump failure one night in instrument weather. The first indication of the failure was the artificial horizon indicating a smooth pull-up followed by a roll into a split-S. Now that I _didn't_ believe. I didn't have any real difficulty getting down, but I sure couldn't hold a heading very well with the magnetic compass.

If one is inclined to think about it, flying provides a good experimental lab to test out control theory explanations of behavior. When I was doing flight instruction, I found it impressive how much easier it made it for a student when I organized instruction in a way that was consistent with control theory. If I said, in effect, control for this perception, the effort and attention were directed to where they did the most good, rather than thinking about sequences of do this, do that, do the other thing, while control of the situation was falling apart.

Bill Williams

[From Bruce Gregory (2003.12.08.1445)]

If one is inclined to think about it, flying provides a good
experimental lab to test out control theory explanations of behavior.
When I was doing flight instruction, I found it impressive how much
easier it made it for a student when I organized instruction in a way
that was consistent with control theory. If I said, in effect, control
for this perception, the effort and attention were directed to where
they did the most good, rather than thinking about sequences of do
this, do that, do the other thing, while control of the situation was
falling apart.

Yes, I feel that way too. So much so that most of my flying during the
past few years has been done from the viewpoint of discovering the
perceptions I need to control, and when I need to control them.

Bruce Gregory

"Everything that needs to be said has already been said. But since no
one was listening, everything must be said again."

                                                                                Andre Gide

···

On Dec 8, 2003, at 2:12 PM, Williams, William D. wrote:

[From Bill Williams 8 December 2003 5:40 PM CST]

CSGfolk,

In what may be the end-game stage of a thread, the following was said by Bruce Gregory. Excuse me for including a full copy of the context in which the statement was made, but I think it is necessary to supply a context.

  [From Bruce Gregory (2003.12.08.1350)]

  > Rick Marken (2003.12.08.0850)
  >
  >
  > Actually, there was quite a bit more to say -- especially correct
  > stuff -- wasn't there?

  Once again you are right! I'm continually amazed that someone who knows
  as much as you do is so understanding and so generous with
  encouragement.

  I'll let you figure out whether the above is sincere or ironic. Most
  people, I suspect, will have little difficulty figuring this out.

  As someone is reputed to have said, "When you know Rick as we know him,
  you will esteem him as we esteem him."

  Bruce Gregory

I am not in a good position to propose that ironists be shot. This might possibly result in a self-inflicted wound. And, I am on the whole inclined to encourage those who are critical of what may be mistaken claims made on behalf of PCT or HPCT. I do so in part _because_ I am aware that I am inclined myself to accept the applicability of control theory in an uncritical, possibly simplistic way.

I hope no one will suspect me of grinding any particular ax in asking the following question: Suppose a skeptic raises a question about a particular feature or set of features in a theory or system under consideration.
Suppose that the adherents to the system exert a considerable effort to demonstrate that the doubts are unfounded. Suppose that the defenders of the system are successful (by some standard) in demonstrating that the doubts raised cannot be sustained. In such a situation is there some sort, or any sort at all, of obligation on the part of the doubter?

Bill Williams

[From Bruce Gregory (2003.12.08.1945)]

Bill Williams 8 December 2003 5:40 PM CST

I am not in a good position to propose that ironists be shot. This
might possibly result in a self-inflicted wound. And, I am on the
whole inclined to encourage those who are critical of what may be
mistaken claims made on behalf of PCT or HPCT. I do so in part
_because_ I am aware that I am inclined myself to accept the
applicability of control theory in an uncritical, possibly simplistic
way.

I hope no one will suspect me of grinding any particular ax in asking
the following question: Suppose a skeptic raises a question about a
particular feature or set of features in a theory or system under
consideration.
Suppose that the adherents to the system exert a considerable effort
to demonstrate that the doubts are unfounded. Suppose that the
defenders of the system are successful (by some standard) in
demonstrating that the doubts raised cannot be sustained. In such a
situation is there some sort, or any sort at all, of obligation on the
part of the doubter?

May I ask who you think was the "doubter"? I simply made a statement
and asked follow-up questions. Bill and Martin provided helpful and
non-deprecatory responses. Rick repeated himself as he is wont to do. (Is
there any question about HPCT that isn't answered "in principle" by his
spreadsheet example?) His teaching style may have contributed to his
decision to change career, but I can't speak to this. If you read the
exchange carefully you will see that only Rick was confused and, as he
admitted, had no idea of what I was talking about. Neither Bill nor
Martin seemed to have the problem. I explicitly thanked them for their
thoughtful responses.

Oh, I get it. You mean _Rick_ should apologize. Well, that's up to him
to decide.

Bruce Gregory

"Everything that needs to be said has already been said. But since no
one was listening, everything must be said again."

                                                                                Andre Gide

Bruce,

Outstanding post. Just Outstanddding. You've exceeded all my expectations.

Bill Williams

"We must always say what we mean, and mean what we say." anon

[From Bruce Gregory (2003.12.08.2032)]

···

On Dec 8, 2003, at 8:00 PM, Williams, William D. wrote:

Bruce,

Outstanding post. Just Outstanddding. You've exceeded all my
expectations.

Bill Williams

"We must always say what we mean, and mean what we say." anon

Modesty (and good sense) prohibit me from asking exactly what those
expectations were...

Bruce Gregory

"Everything that needs to be said has already been said. But since no
one was listening, everything must be said again."

                                                                                Andre Gide

[From Rick Marken (2003.12.08.2200)]

Bruce Gregory (2003.12.08.1945) --

Rick repeated himself as he is wont to do... His teaching style may
have contributed to his decision to change career, but I can't speak
to this. If you read the exchange carefully you will see that only
Rick was confused and, as he admitted, had no idea of what I was
talking about.

Bruce Gregory (2003.12.08.2023) --

This [spreadsheet] model is an example of what physicists call a "toy
model." ... To the extent you are interested in modeling, Rick's
forthright statement might tell you something important. I leave that
for you to decide.

I have to admit that you are very good at what you do, Bruce.

After reading this even I think of Rick as a quarrelsome, confused
teacher wanna-be who thinks he has something important to say about PCT
because he once built a "toy model".

I guess you win after all. Congratulations.

But I still appreciate the nice blurb you wrote for the cover of _More
Mind Readings_. Thanks.

Best regards

Rick

···

--
Richard S. Marken
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[From Bruce Gregory (2003.12.09.0515)]

Rick Marken (2003.12.08.2200)

But I still appreciate the nice blurb you wrote for the cover of _More
Mind Readings_. Thanks.

_More Mind Readings_ was written by the Good Rick. I've always admired
him. By the way, the term "toy model" is not a pejorative in physics.
In fact, like your spreadsheet model, toy models play a valuable role
in studying very complex phenomena. Steve Weinberg considered his
original model of the electroweak interaction pretty much a toy model.
He was given the Nobel Prize for his efforts. Who knows?

Bruce Gregory

"Everything that needs to be said has already been said. But since no
one was listening, everything must be said again."

                                                                                Andre Gide

[From Bill Powers (2003.12.09.0804 MST)]

Bruce Gregory (2003.12.09.0515)--

By the way, the term "toy model" is not a pejorative in physics.
In fact, like your spreadsheet model, toy models play a valuable role
in studying very complex phenomena. Steve Weinberg considered his
original model of the electroweak interaction pretty much a toy model.
He was given the Nobel Prize for his efforts. Who knows?

Gee, that almost sounded like "Sorry for the cruel remarks."

Best,

Bill P.