An experiment

[Martin Taylor 991116 09:33]

Unless and until Bill or someone chooses to comment on my description of
the interplay of digital and analogue control systems, I have retired once
again from talking about the architecture of the hierarchy. But I thought
that CSGnet might be interested in an experiment that is to start
shortly.

I discussed this experiment with Rick Marken some months ago, so long ago
that I imagine he thought it had been forgotten. But it hasn't. The delay
has been because programming (by a contractor) has been very slow. But
it is now ready to go, and probably will start next week, so here is the
description.

At the Defence and Civil Institute of Environmental Medicine, where I
worked until I retired, and where I still work on a volunteer basis, there
has been a long-standing interest in the effects of sleep deprivation.
In 1994 there was a study of the effects of a couple of drugs that were
supposed to mitigate the effects of sleep deprivation. Subjects were
kept working continuously except for a 15 minute break every 2 hours
over three days and two nights, doing a wide variety of tasks. One of
the tasks was a series of different kinds of tracking, programmed by Bill
Powers.

In that study, there were tasks of what seemed like very different
levels of mental complexity, from simple tracking to a complicated
problem in which one person had to describe to another a path on a map
when the base maps of the two people were different. The impression
the analysts got after all the tasks had been analyzed was that the more
complex the task, the less it was affected by sleep loss. This is very
counter-intuitive. Most of us feel that what we can't do when we lack
sleep is "think straight."

But there is an alternate possibility (I won't say "explanation").
Often, in sleep-deprivation studies, experimenters cannot find any
effect of sleep loss. Such studies usually just keep the subjects
awake, and every so often the subjects are asked to do a set of
test tasks. The subjects seem to be able to get "up" for these short
testing sessions. The difference in the DCIEM studies is that the
subjects are kept working essentially all the time, so there is no
real opportunity to conserve their resources and to get "up" for the
testing session. But still they may mentally highlight the more complex
and interesting tasks, and get "up" for those short tests. (And don't
ask me what it means, theoretically, to get "up" for a task. It's a
well-reported subjective phenomenon, and corresponds to performance
measures, but how it happens is a mystery to me.)

Presently there is another ongoing study of sleep deprivation, this
time targeted at the performance of teams (no drugs are involved),
but most of the tasks are for individuals to perform alone. The
subjects will lose only one night of sleep this time. I had the
opportunity to include an experiment, and after discussion with Rick I
proposed one that involves different levels of perceptual control. I
was restricted to very short blocks of time, so the experiment is not
as I would wish it to be, but here it is, anyway. And don't blame Rick
for the details--I consulted only on the main idea, and the details
have changed since we talked.

There are six different pursuit tracking tasks, three involving tracking
a changing number, the other three tracking the changing length of a bar.
The three in each set are at different levels of complexity, the simplest number
task being identical to one of the tracking tasks Bill provided for the
1994 study.

All the displays have in common that something controlled entirely by
the computer is displayed above something (a number or a horizontal bar)
determined entirely by the subject's movement of the mouse. At the
simplest complexity level the subject is asked to keep the lower number
or horizontal bar equal to the upper number or vertical bar (yes, the
"horizontal-vertical illusion" comes into play). At the second complexity
level, the subject is asked to keep the number either 5 less or 5 more
than the computer-controlled number, or to make the horizontal bar
longer or shorter than the computer-varied one by some fixed length
that is also shown on the screen. At the third level, the subject is
asked to make the number or bar equal to a varying sum or difference of
the computer-controlled numbers--that is, something like 45 - 3 might
be displayed, which changes to 44 - 3, 43 - 2, 42 - 1, 41 + 0, 42 + 1...
both numbers varying smoothly but randomly up and down. In the case of
the bar-length task, two varying-length vertical bars are displayed,
that must be added or subtracted to make the length that the subject
must match with the horizontal bar.

Here are ASCII representations of typical screens for a couple of the
tasks--mid-level number, and most complex line-length:

   Mid-level number task:            Most complex line-length task:

         44                                |      |
                                           |      -
         38                                |
                                           |
   (keep the lower number            ----------------
    5 less than the upper)           (keep the lower line equal to the sum
                                      or difference of the upper two lines)

(In the line-length screen, each bar is about 1 cm thick, and has a short
cross-bar at its base, which is at the top of the right-hand line when
it must be subtracted from the "main" line.)
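For anyone who wants to play with the idea, the "varying smoothly but randomly" numbers can be approximated by low-pass-filtered noise. This is only a sketch of one standard way to do it; the generator actually used in the study's software is not described here, and every name and parameter below is made up:

```python
import random

def smooth_random_signal(n, bandwidth=0.02, lo=20.0, hi=60.0, seed=1):
    """n samples that vary smoothly but randomly inside [lo, hi]:
    each step moves a small fraction of the way toward a fresh
    uniform random target (a one-pole low-pass filter on noise)."""
    rng = random.Random(seed)
    x = (lo + hi) / 2.0
    out = []
    for _ in range(n):
        x += bandwidth * (rng.uniform(lo, hi) - x)  # smooth pursuit of noise
        out.append(x)
    return out

# Level-3 stimulus sketch: two smoothly varying numbers; the value the
# subject must match is (here) their difference, e.g. "45 - 3".
a = smooth_random_signal(300, seed=1)                  # the "main" number
b = smooth_random_signal(300, lo=0.0, hi=9.0, seed=2)  # the small add-on
targets = [round(ai) - round(bi) for ai, bi in zip(a, b)]
```

Because each step moves only a fixed small fraction of the distance to a bounded random target, the signal stays inside [lo, hi] and its per-frame change is bounded, which is roughly what "smoothly but randomly" requires.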

In the 1994 study, the control model that best fit the data was one
earlier suggested by Rick, that involved a predictive increment to the
reference value (add some constant times the derivative of the perceptual
signal to the reference value). Using that model, I found that subjects
who were not given the anti-sleep-loss drugs tended to give more weight
to the prediction component as the sleep loss got worse. So one of the
questions about the new study will be whether this finding holds up again.

But the main question will be whether sleep loss will differentially
affect the model parameters (gain, lag, and predictive weight) at the
different levels of task complexity.
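For concreteness, here is a minimal discrete-time sketch of the kind of model being fitted: an integrating output function, a transport lag, and the predictive increment (a weighted derivative of the perceptual signal added to the reference). All names and parameter values are illustrative assumptions, not the fitted model from the 1994 study:

```python
from collections import deque

def track(target, gain=5.0, lag_steps=8, pred_weight=0.02, dt=1/60):
    """Pursuit tracking with the predictive-increment model: reference =
    target + weight * d(perception)/dt; an integrating output function
    acts on an error delayed by a transport lag."""
    output = 0.0                     # cursor position set by the model
    prev_p = 0.0
    lag = deque([0.0] * lag_steps)   # transport-lag buffer for the error
    trace = []
    for t in target:
        p = output                   # perception = current cursor value
        dp = (p - prev_p) / dt       # derivative of the perceptual signal
        prev_p = p
        r = t + pred_weight * dp     # predictive increment on the reference
        lag.append(r - p)
        e = lag.popleft()            # error acted on lag_steps frames late
        output += gain * e * dt      # integrating output function
        trace.append(p)
    return trace

cursor = track([10.0] * 600)         # 10 s at 60 Hz, constant target of 10
```

Fitting a model like this to a subject's data means searching over gain, lag, and predictive weight for the parameter set whose simulated cursor best matches the recorded one.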

The study as a whole (not just my bit of it) involves 16 teams of 4
subjects each, or 64 subjects. Because of the slowness of the programming,
half of the study has already been completed without my tasks, but I should
still be able to run the tracking study on 32 subjects, each of whom will
do three 50-second tracks every
two hours. Every four hours, then, each subject will do all six different
conditions, which means each condition will be done eight times. It's
not enough, really, but it's better than not doing it at all, and we
might get some interesting results out of it. I don't think subjects
should get "up" differentially for these tasks, since what they have
to do is the same at each complexity level, and the three 50-second
tracks are embedded in what is called a "psychomotor battery" (of tasks
designed with no thought of PCT).

Anyway, with all the comments about "why don't people do experiments" I
thought some of you might be interested in one experiment that is pure
PCT. I'll let you know how it comes out, if I get anything. But already
there's one interesting tidbit based on my pilot settings of bandwidth
and range parameters. I used people who are experimenters in the study
to try to set the parameters to levels that the subjects are likely
to find challenging but not frustratingly difficult. I found that in
the bar-length set of conditions I could use the same speed and range
parameters for the "main" computer-varied bar for all the complexity
levels, whereas for the number conditions I had to slow and limit the
range of the "main" number quite drastically for each level of increased
complexity.

We shall see what happens. It's hard to get subjects, and it is quite
probable that the experiment will continue well into the New Year, or
even into spring, if we go by how long it took to get the first 8 teams
of subjects.

Martin

[from Jeff Vancouver 991116. 15:38]

[Martin Taylor 991116 09:33]

Martin,

Is the 1994 study written up? What is this "predictive increment" of which
you speak?

I am gearing up to do the tracking experiment, which had been called the
"Vancouver study" on CSGnet about 3 years ago (you think your lag time is
long). It sounds like that 1994 study might be highly relevant.

Jeff

[From Bill Powers (991116.1208 MDT)]

> Unless and until Bill or someone chooses to comment on my description of
> the interplay of digital and analogue control systems, I have retired once
> again from talking about the architecture of the hierarchy. But I thought
> that CSGnet might be interested in an experiment that is to start
> shortly.

I did comment on it, but have received no answer to my last post on the
subject, in which I showed that your proposal involves a problem with
level-skipping in setting reference signals in the middle of the hierarchy.
The higher analog levels would try to counteract the effects. You said that
there were no higher _digital_ levels to cause such a problem, but I
pointed out that it is the higher _analog_ levels we have to worry about.

Your experiments look interesting. I look forward to seeing the results.

Best,

Bill P.

[Martin Taylor 991116 23:42]

[from Jeff Vancouver 991116. 15:38]

[Martin Taylor 991116 09:33]

> Martin,
>
> Is the 1994 study written up? What is this "predictive increment" of which
> you speak?

It's in a conference proceedings. I can send you a reprint if you send
your surface address (not to CSGnet, but to my private address, which
now is mmt@mmtaylor.net--I often find it is a few days between my
scans of CSGnet, and I don't read everything).

As for the "predictive increment," Rick proposed a modification to the
standard elementary control unit, like this:

                                   | reference
       +--|derivative|->-|weight|->+
       |                           |
       ^                           V
       | perceptual signal         |
       +------>--------->------comparator-------+
       |                                        |
  perceptual                                  output
    input                                   function
   function                                     |
       ^                                        V
       |                                        |
  =====^========================================V=============
       |                                        |
       |                                        |
       +---<--environmental feedback function---+
       ^
       | disturbance
       |

The "predictive increment" is the weighted derivative of the perceptual
signal, which is added to the reference signal (it could equally well
be added with the opposite sign to the perceptual signal before the
comparator, or to the error signal; mathematically they are all the
same). That's what Rick added to the model to fit some of his results,
and I found that doing so made a great improvement to the fits as well.
What I found in 1994 was that the best-fit models used a higher
weight for the predictive component when the subjects on placebo had
lost a night's sleep, but not when the subjects on amphetamine or
modafinil had lost sleep.
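The claimed equivalence of the three placements is easy to check numerically; the values below are arbitrary illustrations, not data from the study:

```python
# Arbitrary illustrative values for reference, perception, weight, derivative.
r, p = 10.0, 7.5       # reference and perceptual signal
w, dp = 0.5, 2.0       # predictive weight and derivative of the perception

e_reference  = (r + w * dp) - p    # weighted derivative added to the reference
e_perception = r - (p - w * dp)    # opposite sign on the perceptual signal
e_error      = (r - p) + w * dp    # added directly to the error signal

assert e_reference == e_perception == e_error == 3.5
```

All three are rearrangements of the same expression r - p + w*dp, which is why the model fits cannot distinguish where the predictive term is injected.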

Martin

[Martin Taylor 991116 23:56]

[From Bill Powers (991116.1208 MDT)]

>> Unless and until Bill or someone chooses to comment on my description of
>> the interplay of digital and analogue control systems, I have retired once
>> again from talking about the architecture of the hierarchy. But I thought
>> that CSGnet might be interested in an experiment that is to start
>> shortly.

> I did comment on it, but have received no answer to my last post on the
> subject, in which I showed that your proposal involves a problem with
> level-skipping in setting reference signals in the middle of the hierarchy.

Sorry. I had obviously misread your posting. I saw the words "level-
skipping", but not where you showed where the level-skipping
occurs. I see none, at least not in the sense the term is ordinarily
used. As I understand "level-skipping," it implies that a unit at level
N contributes to the reference levels of units at levels N-1 and N-k,
where k>1. There's no _technical_ reason that such connections can't
exist. There's a _practical_ reason that you have well described.
What level-skipping does is create a situation in which a unit
tends to set up an intrinsic conflict (setting reference values for
the angle of the steering wheel and for the location of the car in
its lane at the same time). The kind of connections I describe have
none of that character.

> The higher analog levels would try to counteract the effects.

Yes, this always happens in a many-to-many set of connections between
levels. We have diagrammed this often, and shown how the problem
resolves itself if and only if there are enough degrees of freedom in
the lower levels to allow all the upper level systems to bring their
perceptions to their reference values simultaneously. This is exactly
what happens in the connections I describe. There is no level skipping
in the "forbidden" sense.

And I must apologize again, as I hadn't seen that you had followed
through my moment-by-moment description of the changes of signal
values when digital and analogue units combine to set references for
lower-level analogue units, nor had I seen where you described in what
way there is a difference between the situation in which one of the
intermediate-level units is digital and that in which an intermediate
analogue unit has an abrupt change in its reference value.

> You said that
> there were no higher _digital_ levels to cause such a problem, but I
> pointed out that it is the higher _analog_ levels we have to worry about.

Perhaps I might see where the worry comes from if you describe how
signal values vary in the way I did, and show how the "interference"
of a category output would cause trouble. I can't see it from what
you have said so far.

> Your experiments look interesting. I look forward to seeing the results.

I sure hope there are some! It's been a long slog getting the programmers
to get them right. Actually, they run under Windows 95, but on a network.
I wonder if there is any possibility of running them on a single machine,
and if so, if it would be permissible for me to post them here for people
to try. I'll ask, but only after I have run them on at least one team of
subjects. At the moment, I've been so much trouble to the programmers
(I was told they "ran out of money" about 2 months ago) that I hesitate
to ask for anything beyond the most trivial, and that only if it is
important to the running of the experiment. :-(

Martin

[From Bill Powers (991118.0746 MDT)]

Martin Taylor 991116 23:56--

> Sorry. I had obviously misread your posting. I saw the words "level-
> skipping", but not where you showed where the level-skipping
> occurs. I see none, at least not in the sense the term is ordinarily
> used. As I understand "level-skipping," it implies that a unit at level
> N contributes to the reference levels of units at levels N-1 and N-k,
> where k>1. There's no _technical_ reason that such connections can't
> exist. There's a _practical_ reason that you have well described.
> What level-skipping does is create a situation in which a unit
> tends to set up an intrinsic conflict (setting reference values for
> the angle of the steering wheel and for the location of the car in
> its lane at the same time). The kind of connections I describe have
> none of that character.

Only because you don't want them to have that character. If you actually
hooked up the systems as you describe them, they _would_ have that
character whether you want them to or not. When you rely on words alone,
you can make any description seem reasonable, because you don't have to
prove that it's right.

Anyway, you still don't understand my critique. The problem of
level-skipping is not what you describe: setting references for the angle
of the steering wheel and the location of the car in its lane at the same
time. The problem is setting a reference angle for the steering wheel
without considering that some other, higher, system, may _also already_ be
setting a reference angle for the steering wheel. That other higher system
will immediately adjust its output to restore the steering wheel to its
"proper" angle, or will adjust something else to compensate. In your verbal
description you said that the whole system would simply adjust itself to
solve the simultaneous equations just as the analog systems alone would do.
But that is incorrect, because the digital system can't be adjusted
continuously.

>> The higher analog levels would try to counteract the effects.

> Yes, this always happens in a many-to-many set of connections between
> levels.

But if the systems are properly orthogonal it does _not_ happen. You can
control x+y completely independently of x-y, with no conflict at all.
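The x+y / x-y point can be illustrated in a few lines: two integrating control units act on the same pair of environment variables, but because their perceptual functions are orthogonal, both bring their perceptions to their references with no conflict. The gains and reference values here are made up for the demonstration:

```python
def run(r_sum=3.0, r_diff=-1.0, gain=2.0, dt=0.01, steps=2000):
    """Two integrating control units share the same pair of environment
    variables (x, y), but their perceptual functions are orthogonal:
    unit 1 perceives x + y, unit 2 perceives x - y."""
    o1 = o2 = 0.0
    for _ in range(steps):
        x = o1 + o2                            # both outputs affect x ...
        y = o1 - o2                            # ... and both affect y
        o1 += gain * (r_sum - (x + y)) * dt    # unit 1 controls x + y
        o2 += gain * (r_diff - (x - y)) * dt   # unit 2 controls x - y
    return x + y, x - y

p_sum, p_diff = run()   # both perceptions end at their references
```

Because x + y depends only on o1 and x - y only on o2, each unit's action is invisible to the other's perception, which is exactly the "properly orthogonal" case.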

> We have diagrammed this often, and shown how the problem
> resolves itself if and only if there are enough degrees of freedom in
> the lower levels to allow all the upper level systems to bring their
> perceptions to their reference values simultaneously. This is exactly
> what happens in the connections I describe. There is no level skipping
> in the "forbidden" sense.

That doesn't apply when one of the equations admits of only binary values
of its variables. The chances are extremely small that a solution will
exist. The analog equations operate in the domain of real numbers; the
digital system in the realm of binary integers. If you try to mix them
you're going to get very weird results.
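The point about mixing binary outputs with continuous variables can be seen in a toy loop: when the output can take only the values 0 and 1, the error limit-cycles instead of settling. This is only an illustrative sketch, not either correspondent's actual model:

```python
def bang_bang(reference=0.5, rate=0.5, dt=0.01, steps=1000):
    """A continuous (integrating) variable driven by an output that can
    take only the binary values 0 and 1: instead of settling, the error
    chatters forever in a small limit cycle around zero."""
    p = 0.0
    err = []
    for _ in range(steps):
        o = 1.0 if p < reference else 0.0   # binary 'digital' output
        p += (o - 0.5) * rate * dt          # drive is +rate/2 or -rate/2
        err.append(p - reference)
    return err

err = bang_bang()
tail = err[-100:]   # steady state: error stays small but keeps flipping
```

An analog integrator in the same loop would drive the error to zero exactly; the binary output has no value that holds the variable at the reference, so the best it can do is oscillate around it.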

> And I must apologize again, as I hadn't seen that you had followed
> through my moment-by-moment description of the changes of signal
> values when digital and analogue units combine to set references for
> lower-level analogue units

I read it through but it was just a lot of arm-waving -- an analog pointing
system with an error of 30 degrees (as I remember the number) for example!
What did you imagine that the output function would be doing? You can't
analyse a system that way, by verbally describing what you think the
variables are going to do moment by moment. You must come up with either an
analytical mathematical solution, or a simulation. I get your heavy-handed
message, but I didn't comment on that description because I really didn't
feel like telling you what I really thought of it. There's enough
unpleasantness on the net already.

> , nor had I seen where you described in what
> way there is a difference between the situation in which one of the
> intermediate-level units is digital and that in which an intermediate
> analogue unit has an abrupt change in its reference value.

Martin, I'm just not going to get sucked into this. When you have worked
out a mathematical analysis, or have simulated what you're talking about,
I'll listen. But all this verbal ****, unsupported by mathematics, isn't
worth either your time or mine.

Best,

Bill P.