evidence for model-based control

[Hans Blom, 970930]

"A suitcase is as heavy as you think it is", was the title of the
article ("Een koffer is zo zwaar als je denkt". De Volkskrant, a
Dutch national newspaper, Saturday 20 September; http://www.
volkskrant.nl). "Huh?", I thought, "Crazy title. How could that be?"

"The Samsonites are the worst", the article began. "When the cargo
crew that load and unload the airplanes at Amsterdam Airport work
rapidly to service all those departing and arriving planes in time,
they have to estimate the weight of the suitcases on the basis of the
look of their exterior. But the hard plastic Samsonites look the
same, whether full or almost empty. That the loaders are unpleasantly
surprised when a suitcase suddenly turns out to be extra heavy is
unsurprising. But that they can hurt themselves when a suitcase is
empty is strange. Yet exactly that is possible: it is mainly the
loader's expectation about the weight of the burden that determines
how large the forces are that the lower back is exposed to".

The article summarizes the research and very recent PhD thesis of dr.
D. Commissaris, a "movement scientist" at Amsterdam Free University's
Spine Unit. This group researches the relationship between the forces
in and on the body and the stresses that the spine is exposed to.
Their goal is to gain insight into the origins of damage.

Their major measurement instruments are four video cameras, arranged
around a test platform. A computer acquires and processes the images.
In addition it calculates the accelerations of body parts. Because
the masses of head, trunk, arms and legs can be measured or
estimated, the forces that the different joints are subject to can be
calculated.

On the test platform, subjects move boxes whose volume and weight can
be manipulated independently: a large box can be light-weight,
whereas a small box can be surprisingly heavy. The surprise is even
greater when a subject starts out (is primed) with lifting a
16-kilogram box four times, and then lifts an equally large but far
lighter box. Measurements indicate that the dupe's spine is then
exposed to forces almost as great as those on the spine of persons
who really do lift a 16-kilogram box. This effect was most pronounced
when lifting proceeded rapidly. People who want to work rapidly with
heavy loads, the study explains, anticipate. They use their own body
as a counterweight for the box: the upper body moves backward. If the
box is much lighter than expected, the spine is exposed to the body's
own counterweight. This unbalanced counterweight makes the subject
tend to fall backwards. In order to remain in balance, the center of
gravity of the body-box combination has to be rapidly repositioned
above the feet. If the box is much lighter than anticipated, this is
done by taking a step backwards.

Those rapid corrective movements are not without their risk: the
research indicates that an incorrect anticipation of the weight to be
lifted can lead to back problems. Correct anticipation therefore
seems to diminish the possibility of spine injury.

What is the utility of this newly gained knowledge? The researcher
isn't quite certain about that yet. One prospect that she mentions is
to present the weight of the box to the loader before he lifts the
box. Including the Samsonites, of course...

[end of paraphrase]
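
The counterweight explanation can be made concrete with a
back-of-the-envelope calculation. The masses and distances below are
invented for illustration; only the reasoning, that the combined
center of gravity of body plus box has to stay above the feet, comes
from the article.

    # hypothetical numbers, chosen only to illustrate the counterweight argument
    m_body = 70.0                 # kg, mass of the lifter
    x_lean = -0.10                # m, trunk center of gravity moved 10 cm behind the feet
    x_box  = 0.45                 # m, box held 45 cm in front of the feet

    def combined_cog(m_box):
        """Horizontal position of the combined center of gravity, relative to the feet."""
        return (m_body * x_lean + m_box * x_box) / (m_body + m_box)

    print(round(combined_cog(16.0), 3))   # ~0.002 m: almost exactly above the feet
    print(round(combined_cog(0.5), 3))    # ~-0.096 m: well behind the feet, so step back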

A question: If incorrect anticipation can be damaging to our health,
shouldn't we, too, study this bizarre phenomenon?

A second question: We are usually quite surprised -- even "hurt" --
when our anticipation is flagrantly incorrect. Does that indicate
that normally we are pretty good anticipators?

A third question: Would this type of "modeling" be specific to us
humans only, and then only to the scientists amongst us, as Rick
suggests? Or is it far more likely that we can expect this and
similar types of anticipation (expectation primed by previous
experience) in a much larger class of (higher) animals?

And a final question: The article uses the word "dupe" -- even in
this test setting -- for somebody who is "hurt" by his own false
expectations. Is that fair?

Greetings,

Hans

[From Bruce Gregory (970930.1035 EDT)]

Hans Blom, 970930

A question: If incorrect anticipation can be damaging to our health,
shouldn't we, too, study this bizarre phenomenon?

I don't think anticipation is bizarre. On the other hand, it is
not control either.

A second question: We are usually quite surprised -- even "hurt" --
when our anticipation is flagrantly incorrect. Does that indicate
that normally we are pretty good anticipators?

I sure hope so. Animals that do not learn from experience are
probably not long for this world.

A third question: Would this type of "modeling" be specific to us
humans only, and then only to the scientists amongst us, as Rick
suggests? Or is it far more likely that we can expect this and
similar types of anticipation (expectation primed by previous
experience) in a much larger class of (higher) animals?

Again, anticipation is not control. When I put their bowls on
the table prior to filling them, my dogs anticipate that they
will be fed. They don't control their eating on the basis of
this anticipation, however.

Bruce

[From Bill Powers (970930.0832 MDT)]

Hans Blom, 970930 --

"A suitcase is as heavy as you think it is", was the title of the
article ("Een koffer is zo zwaar als je denkt". De Volkskrant, a
Dutch national newspaper, Saturday 20 September; http://www.
volkskrant.nl). "Huh?", I thought, "Crazy title. How could that be?"

An excellent potential example of model-based control, Hans. It certainly
looks as though some people picking up suitcases prepare for the amount of
effort they believe will be needed before they actually pick them up, and
that they base this preparation on recent experience with similar suitcases.

One explanation for this observation is that some people have in their
heads, or acquire, a literal simulation of the physical environment,
including the suitcases and all the laws of physiology and mechanics
involved in picking them up, and compute the bodily orientations and muscle
outputs required to make the suitcases behave in a certain way. I tend to
disbelieve this explanation because of the kind and complexity of the
computations required, but it is a possibility until disproven.

Another explanation is that there is a hierarchy of control systems, the
higher ones adjusting the reference signals for the lower ones, and
controlling more general or abstract perceptions than the lower ones
control. Without trying to be more specific about the levels, we can guess
that the higher systems can select different configurations of the lower
systems according to past experience, for example offsetting the reference
signals for maintaining a vertical position just as the load is grasped and
lifting starts, so that balance will be maintained. On the other hand, it
might be that the balance-control systems can be fast enough to adjust the
bodily lean _as_ the force on the load is increased, to prevent balance
from being disturbed. In the latter case, we would not expect the
occurrence of an empty suitcase to cause any problems, as it does not for
most workers. The workers who have the most severe problems would be the
ones who try to predict what the weight of the suitcase will be, and adjust
their efforts as if the prediction were correct. When the prediction is
incorrect, their automatic control systems will experience large unexpected
disturbances, and in trying to keep from falling over, these workers can
produce enough muscle output to injure themselves.
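
The difference between these two proposals can be sketched in a few
lines of simulation. Everything below is hypothetical (the gains, time
scales and units are invented and are not fitted to the study), but it
shows why a wrong prediction, or a fast lift handled by feedback
alone, produces a large transient in the balance variable.

    # Minimal sketch: anticipatory lean scaled to the *predicted* weight plus
    # slow feedback, versus feedback alone.  All numbers are made up.
    def peak_imbalance(predicted_weight, actual_weight, anticipate,
                       lift_steps=5, steps=300, dt=0.01, gain=4.0):
        """Peak torque imbalance about the feet (arbitrary units) during one lift."""
        fb_lean = 0.0                                  # lean contributed by the feedback loop
        peak = 0.0
        for t in range(steps):
            frac = min(1.0, max(0.0, (t - 50) / lift_steps))
            load = frac * actual_weight                # load torque as the box is taken up
            ff_lean = -frac * predicted_weight if anticipate else 0.0
            imbalance = load + ff_lean + fb_lean       # controlled variable, reference = 0
            fb_lean += dt * gain * (0.0 - imbalance)   # feedback correction of the lean
            peak = max(peak, abs(imbalance))
        return peak

    print(peak_imbalance(16, 16, True))                   # correct prediction: ~0
    print(peak_imbalance(16,  0, True))                   # primed, suitcase empty: ~15
    print(peak_imbalance( 0, 16, False, lift_steps=100))  # feedback only, slow lift: ~4
    print(peak_imbalance( 0, 16, False, lift_steps=5))    # feedback only, fast lift: ~15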

These alternatives are, of course, only proposals until tested
experimentally. The work you cite shows that there is a phenomenon that
needs experimental investigation designed to test different models and
eliminate (one hopes) all but one of them.

Best,

Bill P.

[Hans Blom, 971001b]

(Bruce Gregory (970930.1035 EDT))

I don't think anticipation is bizarre. On the other hand, it is not
control either.
...
Again, anticipation is not control.

I find discussions of what control _is_ entirely fruitless. To me,
"control" is an abstract linguistic concept in a huge "semantic
network" of other linguistic concepts. In such a dictionary-like
network, the "meaning" of a concept is defined in terms of the
"meanings" of the other concepts that it is linked to and the
"meanings" of the links themselves. Thus the "meaning" of something
is dependent on -- and defined by -- whatever other notions we happen
to entertain. That's why we -- and even dictionaries, to a lesser
degree -- never agree ;-).

Because "anticipation" and "control" are different words, one is not
the other: they have different connotations (links), and different
ones for different individuals at that.

But that's not my issue at all. What I point at is that anticipation
is (or "means"; how to avoid that damned word "is"? :wink: an important
contributing factor in how perceptions are "interpreted" into
actions. For me, the notion "control" means: how are perceptions,
given references, translated into adequate actions, where adequacy is
defined in terms of some distance measure between perception and
reference. As you see, the notions "control" and "anticipation" are
extremely close together -- for me...

When I put their bowls on the table prior to filling them, my dogs
anticipate that they will be fed. They don't control their eating on
the basis of this anticipation, however.

A great deal of the behavior that follows their perception of the
bowls and the concomitant anticipation can be "explained" by their
having that anticipation -- in the sense that "naive" dogs do not
have that anticipation and behave differently. That you can say "my
dogs anticipate that they will be fed" must rest on observable and
fairly reproducible behavior. Those bowls have acquired a certain
"meaning" for the dogs; seeing them being put on the table has become
a reliable predictor that they will eat a few moments later. If you
went through exactly the same actions with, say, flower pots rather
than bowls, or if you put the bowl in the sink rather than on the
table, the dogs' actions would be entirely different, I presume.
Easy enough to test! But don't create an unexpected sequence too
often: that will influence their expectations!

Normally -- even though you _can_ frustrate their expectation (but
I'm sure you're far too nice to do that ;-) -- their eating, the
ultimate thing of interest, is rather reliably predictable by their
perception of you putting the bowls on the table. If it weren't, they
would not have the same anticipation.

Anyway, anticipation denotes a certain (subjective) certainty that
something is about to occur; anticipation is (more or less reliable)
prediction. A "controller" could -- if smart enough -- use that extra
(subjective) "knowledge" to prepare itself for subsequent action.

I could go a lot further and say that our anticipations _are_ our
knowledge about the world. More properly formulated: that the (my?)
"distance" between the notions "anticipation" and "knowledge" is
small -- or even negligible. I could also point out that "learning"
is the process of acquiring the (a?) correct and complete set of
anticipations. Maybe another time...

Summarizing, anticipations influence actions. In a mechanistic model
the controller may "contain" an "anticipator" module. And it is this
that establishes the link between (my model of) anticipation and how
model-based controllers are often actually implemented.
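
To make that mechanistic picture a little more concrete, here is a
sketch of a feedback controller that "contains" an anticipator
module. The class names, the learning rate and the linear
cue-to-disturbance mapping are all assumptions made for the sake of
the sketch, not a claim about how any organism actually does it.

    class Anticipator:
        """Learns how strongly a leading cue predicts an upcoming disturbance."""
        def __init__(self, rate=0.2):
            self.weight = 0.0
            self.rate = rate

        def predict(self, cue):
            return self.weight * cue

        def learn(self, cue, observed_disturbance):
            # nudge the cue->disturbance weight toward what actually happened
            self.weight += self.rate * (observed_disturbance - self.weight * cue) * cue

    class Controller:
        """Ordinary error-driven output, optionally pre-compensated by the anticipator."""
        def __init__(self, gain=2.0, anticipator=None):
            self.gain = gain
            self.anticipator = anticipator

        def act(self, reference, perception, cue=0.0):
            feedforward = self.anticipator.predict(cue) if self.anticipator else 0.0
            return self.gain * (reference - perception) - feedforward

After a handful of (cue, disturbance) pairs fed to learn(), act()
starts opposing the disturbance before it is actually perceived,
which is all that "anticipations influence actions" needs to mean here.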

This requires further thought: does the notion "naive", which I used
above, normally refer to persons who do not anticipate consequences
that most of us do anticipate?

Greetings,

Hans

[Vladimir Jojic 971001.1445 MET]

Hans Blom, 971001d

(Bill Powers (970930.0832 MDT))

>>"A suitcase is as heavy as you think it is", was the title of the
>>article ("Een koffer is zo zwaar als je denkt". De Volkskrant, a
>>Dutch national newspaper, Saturday 20 September; http://www.
>>volkskrant.nl).

>>An excellent potential example of model-based control, Hans.

>I thought so as well.

I love it, too ... if this fact makes any difference to you :-)

[and now: SNIP]

>What do you mean by "tested experimentally"? I can construct (and I
>did already present) a simulation that shows anticipatory behavior
>very much like the subjects in this test, including incorrect actions
>if the model's assumptions/anticipations are incorrect. How would you
>include anticipations in a PCT model?

May I propose something that is not in (H)PCT (but is related to
HPCT) that would solve this problem easily?

We could have a control system that controls a variable called "lack
of perceptions", creates an error signal, and passes it to the lower-
level control system as an imagined perception of the real world ...
and you have a weight of the suitcase, and you know that it is
imaginary ...

You can call the whole mechanism model-based control, because that's
what it does ... Any problems?

There you go, you have a RHPCT (recursive HPCT) ...

Uh, oh, this starts to look just like neural networks ... Questions:

Can any group of N neurons be represented with at most N control systems?

Can any group of N (<10, for starters) neurons (organized any way you
like) be approximated with just one control system?

I think that it can be done, and would love to prove it (and will, some
time in the future) ...

>Greetings,
>
>Hans

Thanks for asking such a nice question, :-)
Vladimir

[Hans Blom, 971001d]

(Bill Powers (970930.0832 MDT))

"A suitcase is as heavy as you think it is", was the title of the
article ("Een koffer is zo zwaar als je denkt". De Volkskrant, a
Dutch national newspaper, Saturday 20 September; http://www.
volkskrant.nl).

An excellent potential example of model-based control, Hans.

I thought so as well.

It certainly looks as though some people picking up suitcases
prepare for the amount of effort they believe will be needed before
they actually pick them up, and that they base this preparation on
recent experience with similar suitcases.

Certainly looks that way.

One explanation for this observation is that some people have in
their heads, or acquire, a literal simulation of the physical
environment, including the suitcases and all the laws of physiology
and mechanics involved in picking them up, and compute the bodily
orientations and muscle outputs required to make the suitcases
behave in a certain way. I tend to disbelieve this explanation
because of the kind and complexity of the computations required, but
it is a possibility until disproven.

Yes, I know that this is your position. I, on the other hand, believe
that the computing power of a human brain is vastly superior to even
our most powerful supercomputers. And I _know_ that even a simple PC
can maintain a fairly complex model that is part of a model-based
controller accurately in real time.

The utterly complex model that you think of "a literal simulation of
the physical environment, including the suitcases and all the laws of
physiology and mechanics involved in picking them up, and compute the
bodily orientations and muscle outputs required to make the suitcases
behave in a certain way" isn't required at all here: just a simple
conversion (scaling) factor between size and weight will do. And its
computation would proceed using correlation, something that humans
are well capable of, if only approximately.

What is at issue in this "excellent potential" example is the fact
that a heavy weight must be counterbalanced by the body, but that no
perception _of the weight_ is available at the time the body prepares
itself. What is _available_ is a perception of size; what is
_required_ (as the feedback variable) is weight. The problem is now
how to convert the size-perception into a weight-"imagination". This
problem is simple if a one-to-one correspondence between size and
weight exists and is discovered. In that case, a simple
multiplication in (or after) the perceptual input function would be
all that is required to solve the problem; measuring the size would
provide an "indirect measurement" of the weight. In the example,
people did this rapidly: after only 4 suitcases a correct model was
in place.

The additional -- unsolvable (!) -- problem arises when the
multiplication is by the incorrect value. This happens when the
assumption/anticipation of the constant relationship between size and
weight is violated: in a suitcase that is "unexpectedly" light or
heavy. In such cases, no correct "indirect measurement" is possible
and incorrect anticipation (i.e. relying on the correctness of the
"indirect measurement" when it is incorrect) leads to suboptimal and
even harmful actions.
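
A small numerical illustration of this "indirect measurement" (the
sizes and weights below are invented; only the single scaling factor
k is the point):

    # weight ~= k * size, with k estimated from the four priming lifts
    sizes   = [40.0, 40.0, 40.0, 40.0]      # perceived sizes of the first four suitcases
    weights = [16.0, 16.0, 16.0, 16.0]      # their actual weights

    # least-squares (correlation-style) estimate of the scaling factor
    k = sum(s * w for s, w in zip(sizes, weights)) / sum(s * s for s in sizes)

    next_size = 40.0                         # equally large, but nearly empty
    imagined_weight = k * next_size          # 16.0: the "indirect measurement"
    actual_weight = 1.0
    print(imagined_weight - actual_weight)   # 15.0: the anticipation error the back absorbs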

Why is the problem unsolvable? Because _weight_ must be perceived,
whereas only _size_ is available. The solution suggested was to
present the weight to the subjects (using scales?). That ought to
work. That's also the solution that you offer, essentially: have the
subjects pick up the suitcases so slowly that their weight is
perceived and can be used in the subsequent actions:

On the other hand, it might be that the balance-control systems can
be fast enough to adjust the bodily lean _as_ the force on the load
is increased, to prevent balance from being disturbed.

But I predict that this will not work in the normal circumstances,
where the workers are required to work so rapidly that anticipation
is _required_. Speed kills, they say. Here it only harms.

In the latter case, we would not expect the occurrence of an empty
suitcase to cause any problems, as it does not for most workers.

That's what the article mentioned: the problem is more serious the
more rapidly the subjects work.

The workers who have the most severe problems would be the ones who
try to predict what the weight of the suitcase will be, and adjust
their efforts as if the prediction were correct. When the prediction
is incorrect, their automatic control systems will experience large
unexpected disturbances, and in trying to keep from falling over,
these workers can produce enough muscle output to injure themselves.

Correct. Model-based control is not a cure-all. It can only work if a
good model (here just a multiplication factor) can be constructed.

These alternatives are, of course, only proposals until tested
experimentally. The work you cite shows that there is a phenomenon
that needs experimental investigation designed to test different
models and eliminate (one hopes) all but one of them.

What do you mean by "tested experimentally"? I can construct (and I
did already present) a simulation that shows anticipatory behavior
very much like the subjects in this test, including incorrect actions
if the model's assumptions/anticipations are incorrect. How would you
include anticipations in a PCT model?

Greetings,

Hans

[From Bruce Gregory (971001.0945 EDT)]

Hans Blom, 971001b

(Bruce Gregory (970930.1035 EDT))

>I don't think anticipation is bizarre. On the other hand, it is not
>control either.
>...
>Again, anticipation is not control.

I find discussions of what control _is_ entirely fruitless. To me,
"control" is an abstract linguistic concept in a huge "semantic
network" of other linguistic concepts. In such a dictionary-like
network, the "meaning" of a concept is defined in terms of the
"meanings" of the other concepts that it is linked to and the
"meanings" of the links themselves. Thus the "meaning" of something
is dependent on -- and defined by -- whatever other notions we happen
to entertain. That's why we -- and even dictionaries, to a lesser
degree -- never agree ;-).

I'm sure this is true. However, control has a very specific
meaning in PCT. I believe that those who understand PCT have
little difficulty agreeing as to whether control is involved
in any given situation. (Or rather how to test whether control
is involved.) Since you do not share this understanding, it is
not surprising that you often find yourself speaking at cross
purposes to Bill and Rick.

Because "anticipation" and "control" are different words, one is not
the other: they have different connotations (links), and different
ones for different individuals at that.

Again I think the framework of PCT provides a technical meaning
for anticipation (although the term is not often used).

But that's not my issue at all. What I point at is that anticipation
is (or "means"; how to avoid that damned word "is"? :wink: an important
contributing factor in how perceptions are "interpreted" into
actions. For me, the notion "control" means: how are perceptions,
given references, translated into adequate actions, where adequacy is
defined in terms of some distance measure between perception and
reference. As you see, the notions "control" and "anticipation" are
extremely close together -- for me...

Indeed. But not for PCT.

Anyway, anticipation denotes a certain (subjective) certainty that
something is about to occur; anticipation is (more or less reliable)
prediction. A "controller" could -- if smart enough -- use that extra
(subjective) "knowledge" to prepare itself for subsequent action.

Indeed. The process involves memory and imagination. Both have
well defined roles in PCT.

Bruce

[From Bill Powers (971001.1050 MDT)]

Hans Blom, 971001d--

I tend to disbelieve this (literal model-based) explanation
because of the kind and complexity of the computations required, but
it is a possibility until disproven.

Yes, I know that this is your position. I, on the other hand, believe
that the computing power of a human brain is vastly superior to even
our most powerful supercomputers. And I _know_ that even a simple PC
can maintain a fairly complex model that is part of a model-based
controller accurately in real time.

This makes your proposal a matter of faith. What if the brain's computing
power turns out not to be as vast and precise as you assume? Or suppose its
superiority does not lie in its ability to do symbolic or numeric
calculations very rapidly? How fast do you think you could execute the
calculations in your PC model without the aid of a computer? I think it
would take you hours to do a single iteration with the required accuracy,
and you would probably make many mistakes -- especially if you weren't
allowed to write down the intermediate results with pen and paper, but had
to hold them in your human memory.

The utterly complex model that you think of "a literal simulation of
the physical environment, including the suitcases and all the laws of
physiology and mechanics involved in picking them up, and compute the
bodily orientations and muscle outputs required to make the suitcases
behave in a certain way" isn't required at all here: just a simple
conversion (scaling) factor between size and weight will do. And its
computation would proceed using correlation, something that humans
are well capable of, if only approximately.

You obviously haven't given much thought to what would actually be required.

On the other hand, it might be that the balance-control systems can
be fast enough to adjust the bodily lean _as_ the force on the load
is increased, to prevent balance from being disturbed.

But I predict that this will not work in the normal circumstances,
where the workers are required to work so rapidly that anticipation
is _required_. Speed kills, they say. Here it only harms.

The workers, if required to work faster than their control systems can
actually work, will make mistakes and injure themselves. The obvious
solution, and one that most experienced workers use, is to lift slowly
enough to allow the balance to be adjusted as the weight is taken up by the
arms. Then one is never surprised by a wrong prediction. And we're talking
about the difference between starting the lift instantly and starting it a
few tenths of a second later. Starting it slightly later would clearly
result in an increase in productivity, because the worker would pick up the
package on the first try, would never have to chase down light packages
that had been tossed away, and would never require time off to recover from
injuries.

In the latter case, we would not expect the occurrence of an empty
suitcase to cause any problems, as it does not for most workers.

That's what the article mentioned: the problem is more serious the
more rapidly the subjects work.

But they don't actually work "more rapidly." They're using a strategy that
ends up slowing them down.

The workers who have the most severe problems would be the ones who
try to predict what the weight of the suitcase will be, and adjust
their efforts as if the prediction were correct. When the prediction
is incorrect, their automatic control systems will experience large
unexpected disturbances, and in trying to keep from falling over,
these workers can produce enough muscle output to injure themselves.

Correct. Model-based control is not a cure-all. It can only work if a
good model (here just a multiplication factor) can be constructed.

It is not just a multiplication factor. If you can't see that, you just
don't understand how to model real systems.

These alternatives are, of course, only proposals until tested
experimentally. The work you cite shows that there is a phenomenon
that needs experimental investigation designed to test different
models and eliminate (one hopes) all but one of them.

What do you mean by "tested experimentally"? I can construct (and I
did already present) a simulation that shows anticipatory behavior
very much like the subjects in this test, including incorrect actions
if the model's assumptions/anticipations are incorrect. How would you
include anticipations in a PCT model?

Simulations are not experimental tests.

Best,

Bill P.

[Hans Blom, 971007]

(Bill Powers (971001.1050 MDT))

I ... believe that the computing power of a human brain is vastly
superior to even our most powerful supercomputers. And I _know_
that even a simple PC can maintain a fairly complex model that is
part of a model-based controller accurately in real time.

This makes your proposal a matter of faith.

This is silly. You may assume that _all_ my proposals (and all my
beliefs and convictions) are a matter of faith. But what does that
have to do with the matter we were discussing?

What if the brain's computing power turns out not to be as vast and
precise as you assume?

Think number of elementary operations per second and degree of
parallelism.

The utterly complex model that you think of "a literal simulation
of the physical environment, including the suitcases and all the
laws of physiology and mechanics involved in picking them up, and
compute the bodily orientations and muscle outputs required to make
the suitcases behave in a certain way" isn't required at all here:
just a simple conversion (scaling) factor between size and weight
will do. And its computation would proceed using correlation,
something that humans are well capable of, if only approximately.

You obviously haven't given much thought to what would actually be
required.

Let me rephrase: over and beyond whatever complexity a PCT-type
controller would require in this situation, a model-based controller
would require only _one_ extra parameter/degree of freedom, a simple
multiplication factor relating size and weight, as well as a means to
compute it.
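
One possible form of such a "means to compute it", offered here only
as a sketch with assumed numbers, is a recursive update of the single
factor k after every lift, so that a handful of suitcases suffices:

    def update_k(k, size, felt_weight, rate=0.5):
        """Move k toward the value that would have predicted the felt weight."""
        prediction_error = felt_weight - k * size
        return k + rate * prediction_error / size

    k = 0.0
    for size, weight in [(40, 16), (40, 16), (40, 16), (40, 16)]:
        k = update_k(k, size, weight)
        print(round(k, 3))        # 0.2, 0.3, 0.35, 0.375: heading toward 16/40 = 0.4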

The workers, if required to work faster than their control systems
can actually work, will make mistakes and injure themselves. The
obvious solution, and one that most experienced workers use, is to
lift slowly enough to allow the balance to be adjusted as the weight
is taken up by the arms. Then one is never surprised by a wrong
prediction. And we're talking about the difference between starting
the lift instantly and starting it a few tenths of a second later.

Why don't the workers think of that? It's so simple! ;-)

But they don't actually work "more rapidly." They're using a
strategy that ends up slowing them down.

Do you want me to look up the Amsterdam Spine Unit's email address
for you, so that you can provide them with this solution to their
problem?

Greetings,

Hans

[From Rick Marken (971007.1320)]

Bill Powers (971001.1050 MDT)--

This makes your proposal a matter of faith.

Hans Blom (971007) --

This is silly. You may assume that _all_ my proposals (and all my
beliefs and convictions) are a matter of faith. But what does that
have to do with the matter we were discussing?

The matter you were discussing was the model-based control model
of human behavior. You are proposing a model of behavior but you
propose no test that compares the behavior of the model to the
behavior of the system modeled (a human). The tests of your model
that you have described show only that your model does, in fact,
behave. This is like testing the geocentric model of the universe
by showing that the model does, in fact, behave. Such a test doesn't
tell us whether the model actually behaves like the system modeled
(be it human or solar system).

I have presented evidence that the model you propose does not
behave like a human. You reject this evidence because you don't
seem to like it. This is how people of faith deal with evidence;
they reject it if it conflicts with their faith. Since you reject
evidence that conflicts with your faith in the model-based control
model of human behavior, your continued advocacy of that model is
clearly a matter of faith.

Bill Powers --

The workers, if required to work faster than their control systems
can actually work, will make mistakes and injure themselves. The
obvious solution, and one that most experienced workers use, is to
lift slowly enough to allow the balance to be adjusted as the weight
is taken up by the arms. Then one is never surprised by a wrong
prediction. And we're talking about the difference between starting
the lift instantly and starting it a few tenths of a second later.

Hans Blom --

Why don't the workers think of that? It's so simple! ;-)

Most probably do think of it. Or they are unaware of the fact that
they are actually reducing their productivity by adopting the "lift
hard, adjust later" strategy because they _imagine_ that this strategy
is actually increasing their productivity. There are many situations
where people control the "wrong" perception because they cannot
perceive the result (increased productivity in this case) that
controlling this perception is supposed to produce.

Best

Rick


--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

[Hans Blom, 971008]

(Rick Marken (971007.1320))

There are many situations where people control the "wrong"
perception because they cannot perceive the result (increased
productivity in this case) that controlling this perception is
supposed to produce.

To quote someone: isn't this evidence that people are not PCT-like
controllers? ;-)

Greetings

Hans

[From Bill Powers (971008/0934 MDT)]

Hans Blom, 971007--

I ... believe that the computing power of a human brain is vastly
superior to even our most powerful supercomputers. And I _know_
that even a simple PC can maintain a fairly complex model that is
part of a model-based controller accurately in real time.

This makes your proposal a matter of faith.

This is silly. You may assume that _all_ my proposals (and all my
beliefs and convictions) are a matter of faith. But what does that
have to do with the matter we were discussing?

I think you give up too easily and fall back on faith. There is such a
thing as a reasoned argument based on observations and premises. The fact
that at _some_ level we take things on faith (such as nature's general
consistency) doesn't mean that we should simply give up on science.

What if the brain's computing power turns out not to be as vast and
precise as you assume?

Think number of elementary operations per second and degree of
parallelism.

I don't know anyone who can think about such things in meaningful terms.
I'd rather think "how many equations can you solve simultaneously in your
head, and in how many variables?" Any computer can do better -- if you're
talking about literally doing mathematical computations, in symbols or
numbers.

Suitcases:

You obviously haven't given much thought to what would actually be
required.

Let me rephrase: over and beyond whatever complexity a PCT-type
controller would require in this situation, a model-based controller
would require only _one_ extra parameter/degree of freedom, a simple
multiplication factor relating size and weight, as well as a means to
compute it.

Nonsense. The model-based controller requires a complete model of muscle
function, a complete model of the physics of the limbs and objects to be
moved and the environment with which they interact, and even a model of its
own perceptual system -- in addition to the actual muscles, limbs, objects,
and perceptual system. The PCT model requires only the muscles, limbs,
objects, and perceptual system.

The workers, if required to work faster than their control systems
can actually work, will make mistakes and injure themselves. The
obvious solution, and one that most experienced workers use, is to
lift slowly enough to allow the balance to be adjusted as the weight
is taken up by the arms. Then one is never surprised by a wrong
prediction. And we're talking about the difference between starting
the lift instantly and starting it a few tenths of a second later.

Why don't the workers think of that? It's so simple! ;-)

But they do! Most people who are fooled once by the empty suitcase or
milk-bottle are not fooled a second time. They give up trying to anticipate
what can't be anticipated, and use present-time feedback control.

But they don't actually work "more rapidly." They're using a
strategy that ends up slowing them down.

Do you want me to look up the Amsterdam Spine Unit's email address
for you, so that you can provide them with this solution to their
problem?

What fraction of all workers does the spine unit encounter? It seems to me
that there is a powerful selection factor working in their observations:
they get the people who think they can plan their actions and then just
carry them out, which obviously, and painfully, doesn't work. What do they
have to say about all the workers they never see?

If I thought the Spine Unit would pay any attention, I might take you up on
your offer. One could suggest a simple training program that ought to
vastly decrease the number of patients they get. Do you think they'd try it
out?

Best,

Bill P.

[From Rick Marken (971008.1910)]

Me:

>There are many situations where people control the "wrong"
>perception because they cannot perceive the result (increased
>productivity in this case) that controlling this perception is
>supposed to produce.

Hans Blom (971008) --

To quote someone: isn't this evidence that people are not PCT-like
controllers? ;-)

Not really. People are just using PCT control to control perceptions
that they can have only in imagination. So, for example, some people
control what they eat (which foods) not just to control for
perceptions of hunger and taste but also to control for imagined
perceptions of the state of their cardio-vascular system. This
is the kind of "model-based" control (the imagined state of
the cardio-vascular system being the "model") that fits rather
nicely into PCT. The "wrong" kind of model-based control (your
kind) is the kind where control is implemented by the calculation
of outputs based on a model of the physical relationship between
those outputs and the controlled variable.
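
The distinction can be put in code. Both functions below are
hypothetical sketches (the environment function, gains and numbers
are invented); the point is only where the model sits, in the output
calculation or in the perception being controlled.

    def env(output, disturbance=0.0):
        """The real feedback path, which the behaving system need not know."""
        return 2.0 * output + disturbance

    # (1) The "wrong" kind: compute the output from an internal model of the
    #     physical relationship (here, its assumed gain) and act open loop.
    def model_based_output(reference, assumed_gain=2.0):
        return reference / assumed_gain

    # (2) The PCT-compatible kind: a closed loop whose perception may be
    #     imagined (supplied from memory or a model) when the real input is
    #     missing, but whose output is always driven by the error.
    def closed_loop(reference, imagine=None, gain=0.5, steps=50, disturbance=3.0):
        output = 0.0
        for _ in range(steps):
            perception = imagine(output) if imagine else env(output, disturbance)
            output += gain * (reference - perception)
        return env(output, disturbance)            # what actually happens in the world

    print(env(model_based_output(10.0), disturbance=3.0))   # 13.0: disturbance passes through
    print(closed_loop(10.0))                                 # 10.0: disturbance opposed
    print(closed_loop(10.0, imagine=lambda o: 2.0 * o))      # 13.0: the imagined perception is
                                                             # controlled; the real one only as
                                                             # far as the model happens to be right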

Best

Rick


--

Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[Hans Blom, 971013]

(Bill Powers (971008/0934 MDT))

I'm occasionally utterly surprised by the turns our discussions
take. This, for instance:

This is silly. You may assume that _all_ my proposals (and all my
beliefs and convictions) are a matter of faith. But what does that
have to do with the matter we were discussing?

I think you give up too easily and fall back on faith. There is such
a thing as a reasoned argument based on observations and premises.
The fact that at _some_ level we take things on faith (such as
nature's general consistency) doesn't mean that we should simply
give up on science.

Of course not. I have a great deal of faith in science ;-).

What if the brain's computing power turns out not to be as vast
and precise as you assume?

Think number of elementary operations per second and degree of
parallelism.

I don't know anyone who can think about such things in meaningful
terms.

I do. But we'd better stop this nonsense. Our internal models are far
too different to have a meaningful exchange on these matters, it
appears.

Greetings,

Hans