NONMEM Users Network Archive

Hosted by Cognigen

FW: General question on modeling

From: James G Wright <james>
Date: Tue, 20 Mar 2007 14:56:12 -0000

Mark,

I think we need to make a distinction between scientific investigation
and an experiment. An individual experiment should be reproducible, and
our equivalent is the estimation of a given model on a given dataset.
The process of scientific investigation varies substantially among
investigators in any scientific field. I am not optimistic that
scientific research (which implicitly includes the generation of
hypotheses, which are partially synonymous with models) can ever be
reduced to an algorithm.

Best regards,

James G Wright PhD
Scientist
Wright Dose Ltd
Tel: 44 (0) 772 5636914
www.wright-dose.com


-----Original Message-----
From: owner-nmusers
On Behalf Of Mark Sale - Next Level Solutions
Sent: 20 March 2007 13:10
Cc: nmusers
Subject: RE: [NMusers] General question on modeling


Pete,
  Beg to differ, but ...
In all other sciences, being able to independently reproduce results is
the hallmark of a valid piece of work. (Remember cold fusion? No one
else could reproduce it, so it was judged invalid. The angiogenic
factors work, likewise, could not be reproduced for a long time; only
when Folkman showed people how was it accepted as valid.) Why are we so
special that it is OK for the same experiment to give different
results, even different conclusions, with both considered valid? I
think this is more than just differences in interpreting data - it is
like two people running a t-test on the same data and getting different
answers. If that happens, we need to question whether the t-test is a
valid method.
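
To make the t-test point concrete, here is a minimal Python sketch with
made-up numbers: given the same data, the procedure is deterministic,
so two analysts must get the same answer.

    import numpy as np
    from scipy import stats

    # Made-up data: the same measurements handed to two analysts.
    group_a = np.array([5.1, 4.8, 5.6, 5.0, 4.9])
    group_b = np.array([5.9, 6.1, 5.7, 6.3, 5.8])

    # Both analysts run the same two-sample t-test on the same data.
    result_1 = stats.ttest_ind(group_a, group_b)
    result_2 = stats.ttest_ind(group_a, group_b)

    # The procedure is deterministic, so the answers must agree.
    assert result_1 == result_2
    print(f"t = {result_1.statistic:.3f}, p = {result_1.pvalue:.4f}")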

But I agree that covariates are a fairly trivial contributor to
explaining variability. The biggest contributor to variability is time
(high concentration just after dose, low concentration long after
dose), so it is usually the structural model that drives pretty much
everything. It matters more whether you choose an Emax or an
indirect-response model for your PD than whether you put age in as a
predictor of Emax.
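
To see why time dominates, here is a minimal Python sketch (all
parameter values are made up): under a one-compartment model,
concentration, and hence effect, is driven almost entirely by time
after dose, and the structural PD choice (a direct Emax model here)
shapes the response far more than any covariate tweak would.

    import numpy as np

    # Hypothetical one-compartment PK parameters (illustrative only).
    dose, V, ke = 100.0, 20.0, 0.1       # mg, L, 1/h
    t = np.array([0.5, 2.0, 8.0, 24.0])  # hours after dose

    # Concentration falls with time after dose...
    conc = (dose / V) * np.exp(-ke * t)

    # ...and a direct Emax model turns concentration into effect.
    emax, ec50 = 100.0, 1.0              # assumed effect scale, potency
    effect = emax * conc / (ec50 + conc)

    for ti, ci, ei in zip(t, conc, effect):
        print(f"t={ti:5.1f} h  C={ci:5.2f} mg/L  E={ei:5.1f}")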

Mark

Mark Sale MD
Next Level Solutions, LLC
www.NextLevelSolns.com


> -------- Original Message --------
> Subject: [NMusers] General question on modeling
> From: "Bonate, Peter" <Peter.Bonate
> Date: Tue, March 20, 2007 8:20 am
> To: <nmusers
>
> Sometimes these threads kill me. There is a degree of art to
> modeling. The art is the intangible things that we do during model
> development. If there was no art, if it was all based on science, then
> all modelers would be equal and two modelers would always come to the
> same model. The fact that we don't is the uniqueness of the process
> and therein lies the art.
>
> I would also like to argue that for most drugs, covariate inclusion in
> a model often reduces BSV and residual variability by very little.
> There are very few magic bullet covariates like GFR with
> aminoglycosides. I would think that if two experienced modelers
> analyzed the same data set and came up with different models that if
> we were to examine these models we would find they probably would have
> similar predictive performance. A classic example of this is when you
> do all possible regressions with a multiple linear regression model.
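>
> For instance, sketched in Python with made-up data (four hypothetical
> covariates, only one of which truly matters): every subset that
> includes the real predictor fits about equally well.
>
>     import itertools
>     import numpy as np
>
>     rng = np.random.default_rng(0)
>
>     # Synthetic data: 100 subjects, 4 covariates, only the first matters.
>     n = 100
>     X = rng.normal(size=(n, 4))
>     y = 2.0 * X[:, 0] + rng.normal(size=n)
>
>     # Fit every possible covariate subset by ordinary least squares
>     # and score each fit by its residual standard deviation.
>     for k in range(1, 5):
>         for subset in itertools.combinations(range(4), k):
>             Xs = np.column_stack([np.ones(n), X[:, subset]])
>             beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
>             print(subset, f"residual SD = {(y - Xs @ beta).std():.3f}")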
>
> Pete Bonate
> Peter Bonate, PhD, FCP
>
> -----Original Message-----
> From: owner-nmusers
> To: 'Mark Sale - Next Level Solutions' <mark
> CC: nmusers
> Sent: Mon Mar 19 19:42:18 2007
> Subject: RE: [NMusers] General question on modeling
>
> Mark
>
> > But, I have to admit that I'm uncomfortable with the concept of the
> > "art" of modeling.
>
> I agree - I like to think of it as a science of modelling - but I have
> heard (at conferences) the "science" of modelling referred to as the
> "art" of modelling.
>
> > decisions on art? Shouldn't we be striving for something more
> > objective than art?
>
> We have that now. The model should perform well in the area that it's
> supposed to. There are a number of diagnostic and evaluation
> techniques that one can use to ask the question "Is my model any good
> for the purpose for which I built it?". I think the underlying concept
> of striving for a single method for building models is inherently
> flawed.
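>
> One concrete form of that question is a simulation-based predictive
> check - a minimal Python sketch, with a stand-in sampler where a real
> check would simulate from the estimated model:
>
>     import numpy as np
>
>     rng = np.random.default_rng(1)
>
>     # Hypothetical observed summary: median concentration at one time.
>     observed_median = 3.2
>
>     # Simulate the same summary from the fitted model many times
>     # (a lognormal stand-in here, purely for illustration).
>     simulated = np.array([
>         np.median(rng.lognormal(mean=1.1, sigma=0.3, size=50))
>         for _ in range(1000)
>     ])
>
>     # A model adequate for this purpose should place the observed
>     # value comfortably inside the simulated distribution.
>     lo, hi = np.percentile(simulated, [2.5, 97.5])
>     print(f"observed {observed_median} vs simulated 95% interval "
>           f"[{lo:.2f}, {hi:.2f}]")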
>
> > If this is art, how do we deal with
> > the reality that two modelers will get different answers (I
> > know,... neither of which is right), but in the end we do
> > need to recommend only one dosing regimen.
>
> By different answers - are you referring to different models? In
> which case the models would presumably be sufficiently confluent that
> their predictions
> of the substantive inference (e.g. dosing regimen) would be the same
> or at least very similar (to within an acceptable dose size).
>
> IMHO, a mistake is made in drug development when we try to find the
> best single model at every stage of the process. Why not have a
> selection of plausible models which all provide essentially the same
> inferences? In this case, when we design the next study, our design
> will incorporate a quantitative measure of our uncertainty in the
> model, rather than just saying "this is the model and that's that".
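>
> As a sketch of what I mean (made-up fit results, with Akaike weights
> as one possible weighting scheme):
>
>     import numpy as np
>
>     # Hypothetical fits: (-2 log likelihood, n parameters, dose in mg).
>     fits = {
>         "one-compartment":    (1002.4, 4, 195.0),
>         "two-compartment":    (1000.1, 6, 205.0),
>         "one-cmt + WT on CL": (1001.0, 5, 200.0),
>     }
>
>     # Akaike weights quantify the relative support for each model.
>     aic = np.array([ofv + 2 * k for ofv, k, _ in fits.values()])
>     w = np.exp(-0.5 * (aic - aic.min()))
>     w /= w.sum()
>
>     # A model-averaged dose carries model uncertainty explicitly,
>     # instead of betting everything on a single "best" model.
>     doses = np.array([d for *_, d in fits.values()])
>     print("weights:", dict(zip(fits, w.round(3))))
>     print(f"averaged dose: {w @ doses:.1f} mg")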
>
> > You suggest (I think) that we should select our model based on
> > what inference we want to examine. I agree. But that is not the
> > question either. There are volumes written about how to identify
> the best/better model once you've found it. I'm interested in how we
> > find it.
>
> This is my point exactly - I don't believe there is an absolute,
> linear method available for finding the best model within the
> framework of hierarchical nonlinear models (there - I've said it).
>
> Steve
> --

Received on Tue Mar 20 2007 - 10:56:12 EDT

The NONMEM Users Network is maintained by ICON plc. Requests to subscribe to the network should be sent to: nmusers-request@iconplc.com.

Once subscribed, you may contribute to the discussion by emailing: nmusers@globomaxnm.com.