[From Bill Powers (950112.0805 MST)]
Martin Taylor (950111.1515)--
But Martin,
it was YOU who proposed it, in talking about Little Baby.
Huh!!??!!!???
What on earth are you talking about? What has the development of new
ECUs using Genetic Algorithms got to do with the notion that the
structure of the hierarchy is inherited?
Hmm. I'm beginning to suspect that you aren't literally talking about
inheritance from parents to child, but about a mathematical method
divorced from the sweaty mechanisms of genetic inheritance. Are you
talking about using the GA inside a single organism during its own
lifetime, so there are no "parents" in the literal sense, but only
predecessor ECUs inside one organism? Are you using "parent" and "child"
in a metaphorical sense only? I think that many of us had the impression
that you were talking about a husband and wife producing children whose
control systems were combinations of those in the husband and wife,
which of course would imply inheritance of a complete hierarchy.
-----------------------------------------------------------------------
Martin Taylor (950111.1820)--
RE: sleep study results
Most of the time when we discuss the fitting of control system
models to real data for the tracking task (compensatory or
pursuit), the model has an output function that is a pure
integrator, and there is a finite transport lag. As I understand
it, this model usually correlates around 99% with the actual handle
movements used by the subject, and it is this model that Tom has
used to predict his own tracking behaviour over a 5-year period.
Is this correct?
Yes, this is correct. The correlation varies with difficulty of the
task, but is usually 0.95 or so. 0.99+ is achieved quite regularly for
relatively easy tasks and with a well-practiced subject.
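For concreteness, the model being discussed -- an elementary control unit whose output function is a pure integrator, with a fixed transport lag, controlling a cursor that is the sum of handle and disturbance -- can be sketched in a few lines. This is a toy illustration with made-up parameter values and noise level, not the fitting programs actually used:

```python
import math
import random

def simulate_tracking(disturbance, k, lag, dt=1.0 / 60.0):
    """Tracking model: integrating output with a transport lag of
    `lag` samples.  Reference is 'cursor on target' = 0.  Parameter
    values used below are illustrative assumptions only."""
    n = len(disturbance)
    handle = [0.0] * n
    cursor = [0.0] * n
    for t in range(1, n):
        perceived = cursor[max(0, t - lag)]          # lagged perception
        error = 0.0 - perceived                      # reference minus perception
        handle[t] = handle[t - 1] + k * error * dt   # pure-integrator output
        cursor[t] = handle[t] + disturbance[t]       # cursor = handle + disturbance
    return handle, cursor

def correlation(a, b):
    """Pearson correlation between two equal-length series."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

random.seed(1)
# Slow, smoothed disturbance (crude low-pass filter of white noise).
raw = [random.uniform(-1.0, 1.0) for _ in range(3000)]
dist = [0.0]
for r in raw[1:]:
    dist.append(0.99 * dist[-1] + 0.05 * r)

# "Subject": the same model plus a little output noise (an assumption).
subj_handle, _ = simulate_tracking(dist, k=5.0, lag=8)
subj_handle = [h + random.gauss(0.0, 0.002) for h in subj_handle]

# Model run with the same parameters and disturbance, no noise.
model_handle, model_cursor = simulate_tracking(dist, k=5.0, lag=8)
r_fit = correlation(model_handle, subj_handle)
```

With a well-matched model and only a little output noise, the model-to-data correlation comes out very high, which is the situation described above for easy tasks and practiced subjects; mismatched parameters or larger noise pull it down.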
For the six subjects, under conditions where, so far as I can see,
the model should fit best, the median correlation of model to data
is about 0.93, 0.91, 0.90, 0.90, 0.84, 0.83. Only a total of 32
out of the 480 runs produced correlations of >0.99, 13 of them by
one subject. This doesn't seem to me to be very good, especially
in view of comments that one ought perhaps to think of throwing out
models that don't fit better than 0.95 correlation. I'm wondering
what should be done to improve the fit.
I suggested a number of strategies for getting good data when the
project was starting out. As I recall, it was not possible, in the last-
minute rush to get the project under way, to implement many of them.
Also, so many different conditions were studied that it was not possible
to get much practice with each one before the effects of drugs and
sleeplessness became important.
The main strategy was to make sure that each subject had practiced each
task under each condition to asymptote, prior to looking for real
effects of drugs on behavior. Each control task was probably novel to
most participants, and some were perceptually difficult. Also, I
suggested that the degree of difficulty (bandwidth of disturbance)
should be selected for something like 5 to 10 per cent tracking error (I
forget the exact number) when learning was complete. This makes the
experienced difficulty about the same for all tasks and subjects (very
roughly). We already knew that the k factor would vary with degree of
difficulty. I had hoped to get some data on that from this experiment,
but the sparsity of the sampling precluded that.
I also pointed out that it is probably a good idea to make sure that all
the subjects were using the same output degree of freedom, and proposed
that a fixed elbow rest be used, and a mouse pad that would assure
minimum slippage of the mouse ball. These are secondary considerations,
but could be important (particularly if the mouse were being used on a
surface that allowed a great deal of slippage). We discussed using an
A/D converter and joystick to eliminate that problem and make the arm-
hand movements more standardized. I also suggested trying to make sure
that the eye-screen distance was standard, perhaps by using a chin-rest.
These latter considerations are obviously not very important for a well-
practiced subject because the same subject would probably settle into a
more or less standard configuration (for me, it's set by the best focus
distance of my computer glasses and hence my chair placement relative to
the normal location of my mouse pad).
The model uses constant parameters during a prediction run, so there is
the implicit assumption that the participant's parameters are constant
during the run. During learning, this is not true -- not only are the
parameters changing during a run, but the participant is reorganizing in
many other ways, such as the way the mouse is held, the position of the
body, what aspect of the display is being watched, and so forth. The
effects of reorganization would be most pronounced on the tasks that
were the most difficult to grasp, such as the pendulum task or the one
with the rotating relationship between direction of cursor movement and
direction of handle movements. Participants might take a lot of practice
to master these tasks.
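One way to check the constant-parameter assumption -- a hypothetical diagnostic of my own, not a program from the project -- is to fit the integrator gain k separately to the two halves of a run; if the participant is still reorganizing, the two estimates should differ. The model form and the least-squares estimator are as follows (toy data, invented values):

```python
import math

def fit_gain(handle, cursor, lag, dt=1.0 / 60.0):
    """Least-squares estimate of the integrator gain k under the
    model form handle'(t) = k * (0 - cursor(t - lag))."""
    num = den = 0.0
    for t in range(lag + 1, len(handle)):
        v = (handle[t] - handle[t - 1]) / dt   # output velocity
        e = -cursor[t - lag]                   # lagged error signal
        num += v * e
        den += e * e
    return num / den

# Toy data generated exactly by the model law with known parameters.
k_true, lag_true, dt = 3.0, 5, 1.0 / 60.0
cursor = [math.sin(0.05 * t) for t in range(1000)]
handle = [0.0]
for t in range(1, 1000):
    handle.append(handle[-1] + k_true * (-cursor[max(0, t - lag_true)]) * dt)

k_whole = fit_gain(handle, cursor, lag_true)
k_first = fit_gain(handle[:500], cursor[:500], lag_true)
k_second = fit_gain(handle[500:], cursor[500:], lag_true)
```

On real data from a still-learning participant, k_first and k_second would disagree; on data like mine, taken after long practice, they should be close.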
The fact that I could produce data that the model would fit very
accurately rests largely on the many, many runs I did while testing the
model, and the fact that I understood all the tasks from the start.
The results you give would be more informative if they weren't averaged
over tasks, difficulties, and types of disturbance. I suspect that the
model fit best for the first task, simple pursuit tracking, and much
worse for some of the others. The obvious explanation would be that on
the harder tasks, learning was far from complete by the end of the first
experimental day -- only a few repetitions of the task.
I'm interested in getting a model that fits well all the time, and
for which the parameters don't vary within a display type for
different disturbance waveforms. Then variations in the parameter
values during normal but sleepy tracking would have a better chance
of meaning something about the subject, and about the particular
effects the anti-sleep drugs might have.
No amount of fiddling with the model is going to help if the data
were taken while subjects were still far from being proficient at most
of the tasks. The basic problem is that the experimenters wanted to get
too much out of the data by using 5 experiments with 2 degrees of
difficulty each and 3 different kinds of disturbances -- 30 different
conditions! There simply wasn't time to establish a baseline for each
condition, and to be sure performance had stabilized so differences in
performance would be meaningful. I raised this objection a number of
times, but as you will recall you couldn't sell it to the experimenters.
The best you can do is to take the tasks where the subjects showed the
least tracking error by the end of the first day, and compare runs under
the same conditions for the same experiment each day. There is no way to
get good information by averaging together experiments of different
types and disturbances of different difficulties.
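The per-condition comparison suggested here could be sketched as follows. The run records and field values are invented for illustration only; the study's actual data format is certainly different:

```python
from collections import defaultdict
from statistics import median

# Hypothetical run records: (subject, task, difficulty, disturbance
# type, model-to-data correlation).  All values are made up.
runs = [
    ("S1", "pursuit",  "easy", "D1", 0.991),
    ("S2", "pursuit",  "easy", "D1", 0.985),
    ("S1", "pendulum", "hard", "D2", 0.78),
    ("S2", "pendulum", "hard", "D2", 0.81),
]

# Group correlations by condition instead of averaging across them.
by_condition = defaultdict(list)
for subject, task, difficulty, dist_type, r in runs:
    by_condition[(task, difficulty, dist_type)].append(r)

medians = {cond: median(rs) for cond, rs in by_condition.items()}
for cond in sorted(medians):
    print(cond, medians[cond])
```

Reported this way, a well-learned easy task and a barely-learned hard one stay distinguishable, instead of being blurred into a single grand median per subject.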
-----------------------------------------------------------------------
Joel Judd (950111)--
RE: your nice commentary on "Wasting time":
The hell of it is that "performance objectives" is a perfectly good idea
if one understands that the objectives are perceptual. The idea of a
hierarchy of objectives is great, if the hierarchy bears some
resemblance to the real human hierarchy. The problem seems to be that
the overall concept is fine, but when it comes to implementing it nobody
can come up with anything but the same old ghastly boring platitudes.
Along the same lines, more or less, I've been watching (on C-SPAN)
various committee meetings on education and jobs. It struck me that with
all this job retraining, the idea of a person's interests in life, the
idea of an interesting career, sort of goes out the window. People
become just re-definable elements in an amorphous "labor force." A
compositor at a newspaper finds the system going all-electronic, and
retrains to become a computer operator. A computer operator finds the
system becoming totally self-running, and retrains as an automobile
mechanic or a short-order cook or a Welcome Wagon driver. And so forth.
The result of this is surely going to be a population with no abiding
interests in anything, no concept of a job except as a way of making
money. And nobody will develop the levels of skill that come from
devoting a whole career to getting better and better at something.
Everyone will be an apprentice, but there will be no mentors. The level
of skill at all jobs will just get lower and lower.
And what will you teach people in school? There's no point in going very
far in any subject, because by the time the student graduates that
subject may not be useful for making a living. We'll be giving PhDs in
"Dilletante."
---------------------------------------------------------------------
Samuel Saunders (950110.1845 EST)--
It could also be interesting to set up a robot with a few intrinsic
references, and some sort of reorganization system, and a few
sensory inputs, and see what it takes to build a control system.
That's an idea that gets tossed around in PCT every now and then. What
we need is someone who is already equipped ($$) to turn out hardware and
electronics, and has all the required skills, but who hasn't committed
to any standard way of doing things. Hard to find: the ones you hear
about the most (like Rodney Brooks) are _definitely_ committed to a way
of thinking, and about the only way they'll get into PCT is by
rediscovering it for themselves. Which I'm sure they will do,
eventually.
I could design a robot, mechanically and electronically, but the hell of
it is that I'm too damned old to hold a soldering iron steady and see
all the tiny bits, and I have no money to get things built. Bruce Abbott
suggested just using RC servos, but they don't have the right properties
for modeling an arm and there's still a lot of ancillary stuff that
would be needed. What we need is some young guy who doesn't anticipate
the difficulties but just goes ahead and gets the project going.
I'm not too optimistic about starting right in with reorganization. That
is a much more complex and difficult subject than it seems on the
surface. I'd sooner just design some interesting PCT-type devices,
perhaps self-tuning, that will start people thinking about control in
some way other than inverse kinematics and dynamics, concepts that are
really putting everything on hold as far as PCT is concerned. Until the
intellectual/robotics community starts thinking in terms of control of
perception, we just won't have enough people working on this to make any
big breakthroughs. The Zeitgeist still isn't right for PCT.
------------------------------
Another subject: how are you coming with getting organized to join in
the programming? Have you been able to run any of the Pascal source code
that's been floating around? Any comments on what has been going on so
far?
-----------------------------------------------------------------------
Peter Burke (950112) --
Subj: RE: Reply to junk mail: please add your replies
hmmmmm. Wonder if pct can predict the reaction to this one!!!!!!!???
Peter
By the time I tried to send my message to the America On Line email
address, that address had disappeared. Somebody got there first. The
first address worked; I haven't checked it again.
As to the latest one (about the BBS internet service), I'll be sending
my protest soon. The reaction depends, of course, on whether the
disturbance is just from one guy or is so overwhelming that opposing it
would cause more error than switching to some other approach. I presume
that the goal is to make money. You don't make money by trying to hold
back a hurricane.
-----------------------------------------------------------------------
Best to all,
Bill P.