Multidimensional modelling

I haven't seen any posts for a couple of days. Is the list just quiet, or has my ISP once again started filtering out messages from lists?

This note from Nature on-line today seems to have some relevance to the modelling and simulation studies of PCT structure possibilities. They compared millions of potential structures with a wide range of parameter values. I've spent months looking at just three or four structures to model the sleep studies. Obviously there are algorithmic ways to do this kind of experimental manipulation. Perhaps someone on the list is enough of a programmer to use some related approach.

Martin

---------------Nature article follows-------------
Nature 450, 5 (1 November 2007) | doi:10.1038/450005a; Published online
31 October 2007

Journal club
James E. Ferrell

A systems biologist encourages modelling by the millions.

In a typical modelling study, we write down equations, solve them, and
see whether they account for known data. If they do, we claim to
understand some bit of biology. One huge caveat is that many other
models might have matched the data just as well.

Researchers from Peking University in Beijing and the University of
California, San Francisco, have devised a satisfying way of dealing with
this problem (W. Ma et al. Mol. Syst. Biol. 2, 70; 2006).

Their starting point was epithelial patterning in the fruitfly
Drosophila. During embryogenesis, a system known as the 'segment
polarity network' generates repeating stripes of gene expression. The
stripes are initially fuzzy and later become sharp. Ma et al. set out to
see what simple gene circuits were best suited to this sharpening
process.

They formulated differential-equation models for about 14 million ways
of connecting two or three segmentation genes, then randomly chose 100
sets of parameters that defined the strength of the interactions for
each gene. They then carried out computations for each combination to
determine which of them converted fuzzy stripes into sharp ones.

Many topologies worked for at least one parameter set. But only a
fraction worked for more than one or two. Interestingly, the most robust
topologies were all variations on the same design - each had three
sub-circuits, one 'stripe generator' motif and two bistable 'response
sharpeners'. These findings give hope that complex networks may be
decomposed into modular sub-circuits with understandable functions.

Comprehensively examining millions of models is a lot of work, but is
not impossible. And, as Ma et al. show, it can yield important insight
that could not have been derived from studies of one or two.
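For anyone on the list inclined to try something similar, the screening loop the article describes can be sketched in code. This is a minimal illustration only: the function names, parameter ranges, and toy functional test are my assumptions, not Ma et al.'s actual method, and the simple sign-assignment enumeration here is much coarser than their ~14 million circuits.

```python
import itertools
import random

# A minimal sketch of the brute-force screen: enumerate circuit
# topologies, draw random parameter sets for each, and score each
# topology by how many parameter sets pass a functional test.

def all_topologies(n_genes):
    """Yield every way of assigning activation (+1), repression (-1),
    or no link (0) to each ordered gene pair: 3**(n_genes**2) circuits."""
    edges = [(i, j) for i in range(n_genes) for j in range(n_genes)]
    for signs in itertools.product((-1, 0, 1), repeat=len(edges)):
        yield dict(zip(edges, signs))

def random_parameters(topology, rng):
    """Draw one random interaction strength for each non-zero edge."""
    return {e: rng.uniform(0.1, 10.0) for e, s in topology.items() if s}

def screen(n_genes, works, n_param_sets=100, seed=0):
    """Score each topology by how many of n_param_sets random parameter
    draws pass the functional test works(topology, params). In the real
    study the test would integrate the differential equations and check
    that fuzzy stripes become sharp; here it is a pluggable predicate."""
    rng = random.Random(seed)
    scores = []
    for topo in all_topologies(n_genes):
        hits = sum(bool(works(topo, random_parameters(topo, rng)))
                   for _ in range(n_param_sets))
        scores.append((hits, topo))
    scores.sort(key=lambda s: s[0], reverse=True)  # most robust first
    return scores
```

The expensive part, of course, is the functional test itself; the point of the sketch is only that the enumerate-sample-score outer loop is straightforward to program.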

[From Rick Marken (2007.11.01.1440)]

On 11/1/07, Martin Taylor <mmt-csg@rogers.com> wrote:

This note from Nature on-line today seems to have some relevance to
the modelling and simulation studies of PCT structure possibilities.
They compared millions of potential structures with a wide range of
parameter values.

As you know, this approach to testing models is not my cup of tea. The
approach I prefer is to develop experiments that will provide data
that will clearly distinguish between models.

Best

Rick

--
Richard S. Marken PhD
rsmarken@gmail.com

[Martin Taylor 2007.11.01.17.58]

[From Rick Marken (2007.11.01.1440)]

This note from Nature on-line today seems to have some relevance to
the modelling and simulation studies of PCT structure possibilities.
They compared millions of potential structures with a wide range of
parameter values.

As you know, this approach to testing models is not my cup of tea. The
approach I prefer is to develop experiments that will provide data
that will clearly distinguish between models.

Yep. Me, too. I think we need both, though. I thought the interesting thing about the note was that the different structures robust against minor parameter variation all had characteristics in common.

In my attempts at fitting the sleep-loss tracking data, I varied the models by hand, and did serious parameter variation for only four. I couldn't test the characteristics of the most robust kinds of structure, but in finding the best parameters for the four structures I did find one thing in common: the fits are better if there's a tolerance region, meaning the error has to exceed some threshold before there's any output from the comparator. In a way that's similar to the kind of approach they used, though it's far from the same thing.
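The tolerance-region comparator is easy to state in code. A minimal sketch, dropped into a bare-bones one-level control loop; the parameter values are illustrative, not the fitted values from the sleep-loss models:

```python
# Comparator with a tolerance (dead) zone: no output until the error
# exceeds the threshold, and only the excess is passed through.

def comparator(reference, perception, tolerance):
    """Return zero while the error is inside the tolerance region;
    outside it, return only the part of the error beyond the threshold."""
    error = reference - perception
    if abs(error) <= tolerance:
        return 0.0
    return error - tolerance if error > 0 else error + tolerance

def run_loop(reference, disturbance, gain=5.0, slowing=0.1,
             tolerance=0.2, steps=200):
    """Iterate perception = output + disturbance with a leaky-integrator
    output function, and return the settled perception."""
    output = 0.0
    for _ in range(steps):
        perception = output + disturbance
        error = comparator(reference, perception, tolerance)
        output += slowing * (gain * error - output)
    return output + disturbance
```

One qualitative consequence shows up immediately: with a nonzero tolerance the loop settles short of the reference (in this toy, at 2/3 for reference 1.0 and zero disturbance, versus 5/6 with the tolerance set to zero), so the dead zone leaves a persistent residual error that a conventional comparator would keep whittling down.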

Martin


[From Bill Powers (2007.11.02.0600 MDT)]

Just a few minutes to dash off a post before breakfast on this fourth
very busy day. We have many strong allies here at the University of
Manchester. They are eager to learn more and are doing well at absorbing
PCT. Warren Mansell is a tremendous find.

Martin, one of the participants is a friend of yours, Phillip Farrel.
He’s an engineer now working in Ottawa. The problem with him is that I
can’t get more than halfway through an explanation before he understands,
so I don’t get to finish very often. A little like you.

Yesterday I gave a seminar at the U of M before about 50 people. The
reception was warm and interested, and I have committed to reviewing
several PhD theses that use PCT (review them for the students, not the
university).

Today we have a last day of papers, then Saturday sort of free-for-all,
and I come home Sunday. Whew.

I don’t think much of the “millions of models” approach. It
sounds suspiciously like giving up trying to understand. There was another
movement not long ago that involved building multilayer perceptrons by
the jillion to produce, via genetic algorithms, the ultimate Artificial
Intelligences of unknown organization, which would then proceed to take
over the world. Reminds me a bit of when I got a chemistry set, and being
too impatient to study the instructions, I just mixed a little of
everything together, hoping to get an explosion. I got a gooey mess,
which is the outcome I predict for this new idea.

Best,

Bill P.