How well might a model do?

[From Rick Marken (950116.1230)]

Me:

Chuck Tucker and Martin Taylor have recently posted data that seemed
(to them) to pose "problems" for the basic PCT model of behavior.

Martin Taylor (950116 12:10) --

It's becoming a tedious refrain: "Rick, where did you get the idea I
said anything like that?"

Well, let's see. Could it be things like:

I commented that these results did not provide the 95% plus correlation
with the data that is often cited as a requirement for believing in the
validity of a model.

Now, call me crazy, but it sounds like you are saying that your results did
not provide the 95% plus correlation with the data that is often cited as a
requirement for believing in the validity of a model. Is it really such a
stretch to say that you are saying that these data pose a problem for the
basic PCT MODEL? Isn't "invalidity" a problem for a model?

You act like I'm accusing you of "disloyalty" to PCT; I'm not. The data you
cite does pose a problem for the basic PCT model. I wasn't opposed to your
bringing up the fact that this data poses a problem; I was pointing out that,
before concluding that it DOES pose a problem, one should be sure that the
data are good (that the data really represent the process of control). This
is especially true when you are dealing with a model that has regularly
accounted for 99% of the variance in the very tasks you are using. Before you
go off looking for variations in the model that can give you better fits to
the data, it is best to figure out why your results are not like those
obtained in equivalent experiments that have been done before. Looking to
revise and improve the PCT model based on your data is like a chemistry
student looking to revise and improve the periodic table based on his lab
results. I'd look for evidence of sloppy procedure before looking
for ways to "improve" the fit of the basic model.

Besides, it looks like the basic control model fits your data awfully well
(Martin Taylor (950106 11:10)). The fits, measured as correlations, are
generally greater than .98. The worst fit was .94 and in that case the
tracking error was nearly double what it was in the other runs.

One thing you might do (and perhaps we should always do this when we fit
model to data) is estimate the proportion of the variance in the data that is
_predictable_ by the model. This can be done if we can have repeated runs
with the same subject controlling a variable against the same disturbance.
Let's say that we measure variations in the cursor, c, handle, h, and
disturbance, d, on two separate runs of a compensatory tracking task. On both
runs, d is exactly the same. So the "split half" correlation, r12, between
variations of d (d1 and d2) on the two runs is 1.0. The "split half"
correlation between variations in c and h on the two runs will not be as
high. The "split half" correlation between variations in c on the two runs is
likely to be low (unless control is poor) -- say r12 is about 0.45. The
"split half" correlation between variations in h on the two runs is likely to
be quite high -- but not perfect -- say about .97. Assuming that these
"split-half" correlations reflect the correlation of two quantities each
having the same predictable part but independent random parts, the proportion
of variance in the SUM of the two parts (c1+c2 or h1+h2) is

Vp = r12 / [r12 + (1/2)(1-r12)]

where Vp is the estimate of the proportion of variance in the sum that is
predictable and r12 is the "split half" correlation between variations in the
two components of the sum. (This formula was derived by my graduate advisor,
Al Ahumada). The value of Vp means that, if we use the basic PCT model to
account for the sum of, say, the handle movements on run 1 and run 2 (h1+h2)
we should expect the model to pick up no more than .97/(.97+.015) = .984 or
98% of the variance in the sum. That is, the maximum possible correlation
between model handle movements and the SUM of subject handle movements over
the two runs (h1+h2) is sqrt(.98) = .99. The maximum possible correlation
between the model and the sum of subject cursor movements over the two
runs (c1+c2) is sqrt(.45/(.45+.275)) = sqrt(.62) = .79.
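As a minimal sketch of the arithmetic above (Python, with function names of
my own invention, not anything from the original analysis):

import math

def predictable_variance(r12):
    """Ahumada's estimate of the proportion of predictable variance in the
    sum of two runs, given the split-half correlation r12 between them."""
    return r12 / (r12 + 0.5 * (1.0 - r12))

def max_model_correlation(r12):
    """Upper bound on the correlation ANY model can reach against the summed
    data: the square root of the predictable proportion of variance."""
    return math.sqrt(predictable_variance(r12))

# Handle example from the text: r12 = .97 -> Vp ~ .98, max r ~ .99
print(round(predictable_variance(0.97), 3), round(max_model_correlation(0.97), 3))
# Cursor example: r12 = .45 -> Vp ~ .62, max r ~ .79
print(round(predictable_variance(0.45), 3), round(max_model_correlation(0.45), 3))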

This "split-half" technique for estimating predictable variance requires that
we be able to present exactly the same disturbance on two separate
experimental runs, which is no problem in our computer experiments. It has
the virtue of letting people know how well they can expect to do at
predicting the variance in the variables in a control experiment using ANY
model. The random component of the variance in these control tasks probably
comes mainly from slipping mice, lapses in attention and spontaneous
reversals. These things are not likely to occur on both runs of an experiment
-- nor are they likely to occur at the same point in any run, so this "split-
half" technique is a good way to tell when these untoward contributions to
variance are present. Using summed data may not be very PCT-like, but when Vp
is very high (.99+) then the sum of two runs is, for all practical purposes,
the same as a single run. Vp is actually a good way of measuring the
"quality" of the data. Even if control is poor, Vp will be high if there are
few contributions to the random component of the variance.

Best

Rick

[From Bill Powers (950119.2210 MST)]

Martin Taylor (950116.1100) --

Martin, I've looked at some of the screen displays for what I think are
the same runs you were tabulating. In particular, I looked at the
111...B5 series, trial numbers 002, 038, 074, 110, 146, 182, 218, 254,
290, 326, 362 and 398. These use, I believe, gaussian disturbances at
difficulty level 6.

It's clear from the displays that this person was very nearly unable to
track at all. The handle trace shows slight movements up and down
against the disturbance, and shows long-term wanderings that don't
correlate with anything. Once in a while the person manages to get in
synch and approximates tracking for a cycle or two, and then loses it
again. It is ludicrous to think of trying to fit a model to this kind of
data. I'm surprised that the analysis program didn't blow up.

When you give people adequate practice so they actually learn to
control, and when you keep the disturbances within the range where the
person can actually show some skill, the model will fit the performance
very well. None of that was done.

     If the model really means something about the subject, then at
     least for the "A" (pursuit tracking) runs the k and d parameters
     ought to be the same within a column except for measurement noise,
     and so they should for the "B" runs. But they aren't. The slow
     disturbance runs typically show faster integration coupled with
     longer loop delays. In other words, it LOOKS as if the subjects
     take longer to see the effect of their output on error (actually,
     the effect of the disturbance, given the fitting algorithm), and then
     correct it faster when the disturbance is slow than when the
     disturbance is fast.

Actually, in experiments that are properly done, we find that the k
factor regularly _increases_ with difficulty rather than decreasing. The
apparent decrease you see is meaningless because the fit of model to
data is so poor that the calculation of k is meaningless -- as are your
conjectures about changes to the model that might salvage the situation.
You can't make sense of these results just by blindly accepting tables
of numbers: you have to call up the actual graphical displays and look
at what was happening. In the runs mentioned above, the subject was
simply not able to control, and performance was changing radically even
during a single run.

The PCT model we are using is not magical. It fits control behavior very
well, but not random flailings about, which is what we see in the bulk
of the experimental data. Face it: the experiment was badly designed and
badly done. I've been plotting k factors and tracking errors against
time for single subjects under single conditions, and even if the data
were very good, there wouldn't be enough time resolution to see any but
the grossest and slowest-changing effects. When I get to selecting the
runs where the model is able to fit the data, we will find regular
tracking (even if not very close tracking -- I've done some preliminary
looking), and the numbers might have some meaning for those runs. We'll
see.

You seem to be blaming the results on the model, claiming that a good
model ought to be able to predict performance under all these
conditions. That is simply nonsense, when you look at the graphical
results -- what the participants actually did. You couldn't even fit
Newton's law to the acceleration of an object in a gravitational field
if your clocks were poorly constructed, you tried to make the
measurement in an earthquake, and you didn't know whether the object
were falling through a vacuum or water.

This experiment could have been done orders of magnitude better. In the
first place, only one or two tasks should have been used with a single
degree of difficulty, and before any changes of conditions were
introduced, all subjects should have practiced until they had settled
down to regular control behavior (whether good or bad). If we had
started with subjects who showed stable characteristics, the effects of
sleeplessness and drugs might have been visible, if they existed. If we
had tested the same conditions every hour or even more often,
we could have checked not only on systematic changes in parameters, but
on the variability in the parameters that might have shown when or if
reorganization appeared.

The problem was that the experimenters tried to get too much out of the
data by introducing too many conditions and thus reducing the data about
any one condition to just a few samples. And most of the subjects never
came close to asymptote on most of the tasks before the sleeplessness
and drug effects came into play.

     In fact, the more I think about it, the more I persuade myself that
     the dead zone would be a useful parameter to try out.

Go ahead and try it -- but before you do, look at the graphical
displays! I think you will quickly give up on that idea. I know what
dead zone you're thinking of, but it amounts to only four or five
pixels, not 50 to 100. And a simple dead zone won't even account for the
4 or 5 pixels in good tracking data. I know; I tried it.

NO model you can think of can be made to fit data from a run in which
there are radical changes in the person's tracking behavior from moment
to moment during the same run. And that is what you get when you make
the task too difficult to master, and allow too little practice to
achieve mastery.

----------------------------------------------------------------------
Best,

Bill P.

[Martin Taylor 950116 11:00]

Rick Marken (950116.1230)

I commented that these results did not provide the 95% plus correlation
with the data that is often cited as a requirement for believing in the
validity of a model.

Now, call me crazy, but it sounds like you are saying that your results did
not provide the 95% plus correlation with the data that is often cited as a
requirement for believing in the validity of a model.

That's exactly what I was saying.

Is it really such a
stretch to say that you are saying that these data pose a problem for the
basic PCT MODEL?

I think it is a VERY large stretch.

Isn't "invalidity" a problem for a model?

Yes. That's the problem I'm addressing.

You act like I'm accusing you of "disloyalty" to PCT; I'm not.

Glad to know that. It sure sounded as if you were.

The data you cite does pose a problem for the basic PCT model.

Well, if you see them as posing a problem for the basic model, you see more
than I do. I see a problem only for the minimal "pure-integrator-
with-loop-delay" model.

===============================

Before you
go off looking for variations in the model that can give you better fits to
the data, it is best to figure out why your results are not like those
obtained in equivalent experiments that have been done before.

The two parts of that sentence seem to contradict each other, don't they?
If one believes in PCT, it is only variations in the model that can show
why the results are not like those in equivalent experiments that have
been done before. Perhaps one needs other control loops (e.g. adaptive
gain) or perhaps the best one can do is, as Tom suggests, to fit short
segments of the data and accept that there is parameter variation. However,
from my own subjective impression doing the task, and from observations
of subjects doing the "jump disturbance" task, and from Bill P's long-ago
suggestion, it seemed to me that it might be profitable to explore the
notion that there is a don't-care zone (dead zone). If the cursor is
sufficiently close to the target, the subject just doesn't bother to make
it closer, even though the error may be clearly visible. The "motivated"
and practiced old-hands might persistently try to correct ANY detectable
error, whereas less involved subjects might say to themselves "that's good
enough." And if this is the case, the width of that dead zone might grow
with sleepiness.

Besides, it looks like the basic control model fits your data awfully well
(Martin Taylor (950106 11:10)). The fits, measured as correlations, are
generally greater than .98. The worst fit was .94 and in that case the
tracking error was nearly double what it was in the other runs.

Sorry for the dating error. Should have been 950116.

Yes, this subject on these trials fitted reasonably well. But check the
other data that I have posted. This subject (of the six I've looked at)
had the best median of the fitted correlations, and that median fit was
0.93, using the first four runs that you cite, plus other runs over the
same period. Check out, for example, the data I posted 950113 16:30,
and reproduced below, for Subject 111, the first four columns. At the
very least the parameters for the fits should be the same for rows 1 and
3 and for rows 2 and 4, but with the quality of fit shown in the table,
is it worth worrying about the parameter values?

Subject 111:
    Monday>----------Tuesday-----------Wednesday-------Thursday------|Friday
hour 12 08 14 20 V 02 08 14 20 02 V 08 V14 12
Offset
0:20 .884 .941 .959 .960 .875 .938 .953 .913 .966 .945 .888 .972
1:20 .884 .941 .959 .960 .875 .938 .953 .913 .966 .945 .888 .972
2:40 .961 .892 .989 .966 .987 .956 .953 .965 .964 .968 .701 .991
4:20 .991 .934 .996 .990 .988 .995 .924 .945 .970 .992 .931 .988

For the sake of discussion, here are the fitted parameter values for k
(integration rate in units of 1/sec) and d (loop delay in samples) for the
same trials (d = 0 or d=29 probably means the fit was out of range, since
29 is the maximum permitted value).

In each set, k is above d. The code to the left is A=pursuit tracking,
B=compensatory tracking, 3 or 6 = difficulty level, g= Gaussian disturbance.

Subject 111
B6g  k:  0.20  0.26  0.50  0.70  0.28  2.18  2.24  0.30  0.67  0.20  0.39  0.70
     d:     0     2     3     3     3     4     5    29     3    29     3     5

A6g  k:   8.3   9.6   9.5  13.4  13.2  13.0  11.3  14.1   7.4  13.2  13.8  11.4
     d:     6     8     7     9     8     8    11    12     9    12    15     9

B3g  k:   2.6   3.9   4.6   3.6   4.1   3.8   2.5   1.5   3.0   1.5   0.7   4.4
     d:     5     7    22    12    11    11     8     5    12     6    29     8

A3g  k:  12.4   7.4  12.6  17.9   9.2  10.5  12.0  12.1   8.6  18.8  31.7  14.5
     d:    10    25    10    13    16    12    16    13    12    12    13    16

If the model really means something about the subject, then at least for the
"A" (pursuit tracking) runs the k and d parameters ought to be the same
within a column except for measurement noise, and so they should for
the "B" runs. But they aren't. The slow disturbance runs typically show
faster integration coupled with longer loop delays. In other words, it
LOOKS as if the subjects take longer to see the effect of their output
on error (actually, the effect of the disturbance, given the fitting
algorithm), and then correct it faster when the disturbance is slow than
when the disturbance is fast. I don't believe that is what is happening.
But intuitively, it might be what happens if there is a dead zone, since
the slow disturbance takes longer to move the error out of any dead zone
than the fast disturbance does. In fact, the more I think about it, the
more I persuade myself that the dead zone would be a useful parameter to
try out.
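
As a concrete point of reference, here is a minimal sketch (mine, not the
actual fitting program) of the kind of pure-integrator-with-loop-delay model
under discussion, with an optional dead zone added in the way suggested
above; the 60-per-second sampling rate, the variable names, and the placement
of the dead zone on the perceived error are all assumptions.

import numpy as np

def simulate_handle(disturbance, k, delay, dead_zone=0.0, dt=1.0/60.0):
    """Pure-integrator-with-loop-delay model of compensatory tracking.
    cursor = handle + disturbance, reference = 0; the handle integrates
    the delayed error at k units/sec; errors within dead_zone pixels of
    zero are treated as zero (the "don't-care" zone)."""
    n = len(disturbance)
    handle = np.zeros(n)
    cursor = np.zeros(n)
    cursor[0] = disturbance[0]
    for t in range(1, n):
        error = -cursor[t - delay] if t >= delay else 0.0  # delayed perceived error
        if abs(error) <= dead_zone:
            error = 0.0                                    # inside the dead zone
        handle[t] = handle[t - 1] + k * dt * error
        cursor[t] = handle[t] + disturbance[t]
    return handle, cursor

# A fit would then search over k, delay (and dead_zone) for the best match
# between simulated and recorded handle positions.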

=============

One thing you might do (and perhaps we should always do this when we fit
model to data) is estimate the proportion of the variance in the data that is
_predictable_ by the model.

I have a variant on that, which is incorporated in the current analysis
program (if it is properly debugged--I was working on it yesterday).

The idea is that the model IS a control system, controlling against a
disturbance to within some measured RMS error. The subject, likewise,
is a control system with some measured RMS error. If the model and the
subject are quite independent and different control systems, then the
mean square (MS) deviation between the model and the subject should be
the sum of the two tracking MS errors. But if the model control system
is like that of the subject, then the MS deviation between them should be
much less than the sum. (The correlation doesn't show this, since the model
could have a different scale from the subject and still show a perfect
correlation).

The measure I am using is intended to be 1 minus the ratio of the
MS (model-real deviation) to the sum of the MS (model-disturbance)
and MS (real-disturbance) tracking errors. A value of 0.0 means that the
model is worthless in describing the data, and a value of 1.0 means that the
model describes the data perfectly.
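
As I understand the description, the computation amounts to something like
the following sketch (my own reconstruction, not the analysis program itself;
the variable names are assumed, and the tracking errors are taken as cursor
deviations from a zero reference):

import numpy as np

def model_quality(handle_model, handle_real, cursor_model, cursor_real):
    """1 - MS(model-real deviation) / (MS model tracking error +
    MS real tracking error): 1.0 = the model describes the data perfectly,
    0.0 = the model is worthless."""
    ms_deviation = np.mean((handle_model - handle_real) ** 2)
    ms_model_error = np.mean(cursor_model ** 2)  # cursor deviation from reference 0
    ms_real_error = np.mean(cursor_real ** 2)
    return 1.0 - ms_deviation / (ms_model_error + ms_real_error)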

Unfortunately, I think I must have a programming bug, because the values
I get are scattered all over the place, mostly in the range of 0.2 to 0.7.
I've already double-checked the program, but triple and quadruple checks
seem to be in order.

===================

This can be done if we can have repeated runs
with the same subject controlling a variable against the same disturbance.
Let's say that we measure variations in the cursor, c, handle, h, and
disturbance, d, on two separate runs of a compensatory tracking task. On both
runs, d is exactly the same.

We do have such conditions, but the "sleepiness" state is liable to be
quite different between them.

Vp = r12 / [r12 + (1/2)(1-r12)]

(This formula was derived by my graduate advisor, Al Ahumada)

An honoured name, indeed. I seem to remember work of his on detection of
signals with identical noise waveforms on different trials. Presumably
that's where the formula comes from?

It's a good idea, and I'll check the data to see whether there is any way
of applying it sensibly to this experiment.

Maybe the formula could work in comparing the model and real handle in any
case, taking the disturbance (delayed) to represent the predictable part
and the model or real tracking errors to be the independent random parts.
I'll have to think about it, or perhaps someone else could say for sure.

Martin

[Martin Taylor 950120 13:30]

Bill Powers (950119.2210 MST)

You can't make sense of these results just by blindly accepting tables
of numbers: you have to call up the actual graphical displays and look
at what was happening.

It's not so easy for me to look at the graphical displays of the tracks,
but from what you say, it should be worth my while to put some effort into
doing so. Certainly the graphics often provides a different slant on any
kind of data. But until then...

1. I have found a couple of minor bugs in the fitting program. Fixing them
improves the model fit correlations somewhat, but doesn't change the general
nature of the problem at hand.

2. Obviously, if a subject changes the manner of tracking during the run,
no single model fit will be very good. Your eyeball suggests that this
was happening with Subject 111 on task B5g, but is it possible that what
you are seeing is instead the higher-frequency components of the disturbance,
which are less accurately controlled than the low-frequency components?

It is true that PCT predicts that there will be reorganization when the
subject is not controlling well, which would be the case with a difficult
disturbance, but it might also be the case that the subject is controlling
consistently in the spectral region where control is possible. To check
this, one would need to filter both the disturbance and the handle tables.
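
One way to run that check, sketched here under my own assumptions (a SciPy
Butterworth filter, a 60 Hz sample rate, an arbitrary 0.5 Hz cutoff), would
be to low-pass filter both records and compare them in the band where control
is plausible:

import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(x, cutoff_hz, sample_rate_hz=60.0, order=4):
    """Zero-phase low-pass filter: keep only the band where control is
    plausibly present before comparing handle and disturbance."""
    b, a = butter(order, cutoff_hz / (sample_rate_hz / 2.0))
    return filtfilt(b, a, x)

# Example use (handle and disturbance are the recorded tables):
# handle_low = lowpass(handle, cutoff_hz=0.5)
# disturbance_low = lowpass(disturbance, cutoff_hz=0.5)
# If control is consistent in this band, -handle_low should correlate
# highly with disturbance_low even when the raw model fit is poor:
# print(np.corrcoef(-handle_low, disturbance_low)[0, 1])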

Actually, in experiments that are properly done, we find that the k
factor regularly _increases_ with difficulty rather than decreasing.

That, in itself, suggests the existence of a problem with the model.

3.

In fact, the more I think about it, the more I persuade myself that
the dead zone would be a useful parameter to try out.

Go ahead and try it -- but before you do, look at the graphical
displays! I think you will quickly give up on that idea. I know what
dead zone you're thinking of, but it amounts to only four or five
pixels, not 50 to 100. And a simple dead zone won't even account for the
4 or 5 pixels in good tracking data. I know; I tried it.

I am glad to know the results of your trials. I'd asked you before about
whether you got anything from them. Actually, I was thinking of a dead zone
of 1 or 2 pixels, not 4 or 5. And I am attempting to try it, but as I got no
response from you about a satisfactory fitting method, I was trying out
some ideas of my own, which didn't work yesterday. Just from sampling,
though, I get the impression that including a dead zone can bring up the
fitting correlations for the jump disturbances. And those disturbances
ought to be fit by models with the same parameters as fit the smooth
disturbances. They provide no apparent challenge to the subjects!

NO model you can think of can be made to fit data from a run in which
there are radical changes in the person's tracking behavior from moment
to moment during the same run. And that is what you get when you make
the task too difficult to master, and allow too little practice to
achieve mastery.

No, but though this isn't data, I should point out that the subjects typically
thought that the tracking tasks were "easy," except for the pendulum and
the small disk on the large circle before they got the hang of them. So if they
were changing what they were doing during the runs, the changes were presumably
going on at some level of which they were unaware.

I'm not giving up on these data, and I hope you don't, either.

Martin

Tom Bourbon [950123.0849]

[Martin Taylor 950120 13:30]

Bill Powers (950119.2210 MST)

You can't make sense of these results just by blindly accepting tables
of numbers: you have to call up the actual graphical displays and look
at what was happening.

It's not so easy for me to look at the graphical displays of the tracks,
but from what you say, it should be worth my while to put some effort into
doing so. Certainly the graphics often provides a different slant on any
kind of data. But until then...

Martin, are you saying you ran all of the statistical analyses on your
subjects, then began posting messages about how the data raise problems for
the PCT model and suggesting changes in the model, ALL without ever looking
at the graphical plots of the runs?! If so, that strikes me as unbelievable.
It's not that "graphics often provides a different slant," but that unless
you look at the results first, you don't know whether *any* analysis will
work. Remarkable!

Later,

Tom

[Martin Taylor 950124 11:20]

Tom Bourbon [950123.0849]

Martin, are you saying you ran all of the statistical analyses on your
subjects, then began posting messages about how the data raise problems for
the PCT model and suggesting changes in the model, ALL without ever looking
at the graphical plots of the runs?! If so, that strikes me as unbelievable.
It's not that "graphics often provides a different slant," but that unless
you look at the results first, you don't know whether *any* analysis will
work. Remarkable!

As for whether looking at the tracks is something that one would naturally
do first, I looked at the tracks when we were doing pilot studies, for
quite a few instances. I saw nothing in them that would have led me to
believe that the model fits would be questionable, and have heard nothing
previously from you guys to say that one should look at a track to see whether
it should be fitted by the model before doing so. In fact, it seems to me
scientifically quite illegitimate to pre-select for model testing only those
occasions on which the model is seen to be likely to work.

Once there is a suggestion as to why the model might not have worked in a
particular case, then there is reason to look at the tracks to see whether
that suggestion appears sensible. But so far, Bill has not responded (in
any postings that have arrived here) to my question as to whether
the tracking errors on the faster disturbances might possibly be the
higher frequency part of the disturbance, which is more poorly controlled
than the lower frequency components. If it is not trivially easy to see
that, then what use is it to even look at the tracks without specific
guidance as to what to seek?

No. I think it is much more defensible to analyze according to the model
suggested, and THEN to look to the graphics for clues that might help to
see what kinds of deviations there might be.

===================

The only way I know to look at the graphical tracks would be either to
write a plotting program or to run Bill's Viewdata program on the PCs
from which the data came. Those PCs are not readily available, and I
don't know whether the data still reside on them--or whether the programs
do. It took quite some time to set up originally, and unless there is
some strong reason to spend equivalent time, I'm not likely to do it.

The correspondence to date suggests that a strong reason might well exist,
and I might try to seek out those PCs. But if the data are no longer on them,
then that's a whole 'nother hassle.

Martin

Tom Bourbon [950125.1058]

[Martin Taylor 950124 11:20]

Tom Bourbon [950123.0849]

Martin, are you saying you ran all of the statistical analyses on your
subjects, then began posting messages about how the data raise problems for
the PCT model and suggesting changes in the model, ALL without ever looking
at the graphical plots of the runs?! If so, that strikes me as unbelievable.
It's not that "graphics often provides a different slant," but that unless
you look at the results first, you don't know whether *any* analysis will
work. Remarkable!

As for whether looking at the tracks is something that one would naturally
do first, I looked at the tracks when we were doing pilot studies, for
quite a few instances. I saw nothing in them that would have led me to
believe that the model fits would be questionable,

We must have learned our science (psychology and psychophysics) in different
universes, Martin. :-) To my frequent, deep dismay, while I was still
an undergraduate I learned that pilot data from traditional experimental
designs are without exception the reverse of data collected during the runs
that count. The reliability of this phenomenon, at least when I ran the
experiments, led my colleagues to dub it "the Bourbon principle."

A little more seriously, I learned that it is very dangerous to run
experiments and analyze the data, without first looking closely at the raw
data. Wasn't that part of the conventional training in experimental
psychology?

and have heard nothing
previously from you guys to say that one should look at a track to see whether
it should be fitted by the model before doing so.

Where have you been all of these years, Martin? ;-) Don't you remember all
of our posts, and our articles, in which we repeated the litany: "first the
phenomenon, then the model." (In case you forgot it, I'm *certain* Rick
will be delighted to remind you.) ;-) If we observe what we believe is an
instance of the phenomenon of control, then we can do the "Test for a
Controlled Variable" and if that works we can apply the PCT model. Absent
the phenomenon of control, we do not need a theory to explain control.
Everything in PCT science starts with observation of the phenomenon of
control.

In fact, it seems to me
scientifically quite illegitimate to pre-select for model testing only those
occasions on which the model is seen to be likely to work.

"scientifically quite illegitimate." Those are strong words. I always
thought, and taught, that people often make fools of themselves, or at
least they waste a lot of time, when they apply elaborate methods to
analyze data, then take off into soaring flights of theoretical fancy,
all without ever looking at the simplest possible representations of their
raw data. Does the definition of "legitimate science" turn on a blind
application of analytical methods, followed by revisions to theory, all
without ever looking to confirm that the phenomenon under study occurred?
If so, I am in big trouble, for that kind of science strikes me as being a
bit too much like the work of the Schoolmen -- the Scholastics, and long
before them, the Sophists.

Once there is a suggestion as to why the model might not have worked in a
particular case, then there is reason to look at the tracks to see whether
that suggestion appears sensible.

This looks very odd to me, Martin. You say you would begin when there is
"a suggestion" as to *why* the model might not have worked, *then* look at
the data for the first time to see whether the *suggestion* appears
sensible? For one thing, I don't know what you mean by "a suggestion." Do
you mean the analysis "suggests" something to you -- it, as agent, acts
upon you? Or do you mean that after you see the results of the analysis,
you conclude (hypothesize, imagine, guess) that there may be a particular
reason for the results?

All of that is irrelevant next to what I see as the truly odd part of
your remarks. Why would it be "scientifically quite illegitimate" to simply
look at the data first, to see if the phenomenon occurred? If it did not,
then it would seem perfectly natural to me were you to speculate on why that
negative result had happened. You could do that quickly and easily,
without depending on elaborate analyses to give you "suggestions." :-)

But so far, Bill has not responded (in
any postings that have arrived here) to my question as to whether
the tracking errors on the faster disturbances might possibly be the
higher frequency part of the disturbance, which is more poorly controlled
than the lower frequency components. If it is not trivially easy to see
that, then what use is it to even look at the tracks without specific
guidance as to what to seek?

Well, for one thing, do the tracks of mouse and cursor positions contain
"higher frequency components," or "lower frequency components," or both of
them, or do we simply take it on faith, or on assertion, that those things
exist in every set of data?

No. I think it is much more defensible to analyze according to the model
suggested, and THEN to look to the graphics for clues that might help to
see what kinds of deviations there might be.

Theory first, data second. That *is* a new approach to science! ;-)

Later,

Tom