NONMEM Users Network Archive


Re: General question on modeling

From: Paul Hutson <prhutson>
Date: Mon, 19 Mar 2007 20:07:48 -0500
Joga Gobburu:
In the context of Nick, Mark, and Steve's comments, can you give us any insight into the FDA's current attitude, preferred methodology, or a reference for model construction and testing? Thanks!
Paul

Nick Holford wrote:
Mark,

If we are talking about science then we are not talking about regulatory decision making. The criteria used for regulatory approval and labelling are based on pragmatism, not science, e.g. using intention to treat analysis (use effectiveness rather than method effectiveness) or dividing a continuous variable like renal function into two categories for dose adjustment. This kind of pragmatism is more art than science because it does not correctly describe the drug properties (ITT typically underestimates the true effect size) nor rationally treat the patient with extreme renal function values.

As Steve reminded us, all models are wrong. The issue is not whether some ad hoc model building algorithm is correct or has the right type 1 error properties under some null that is largely irrelevant to the purpose. The issue is whether the model works well enough to satisfy its purpose. Metrics of model performance should be used to decide if a model is adequate, not a string of dubiously applied P values.

The search process is up to you. I think from your knowledge of computer search methods you will appreciate that those methods that involve more randomness/wild jumps in the algorithm generally have a better chance of approaching a global minimum [see the sketch following this message].

IMHO the covariate search process is like the search for the Holy Grail. It's fundamentally a process for those with a religious belief that there is some special set of as yet unidentified covariates that will explain between subject variability. As a non-believer I think that all the major leaps in explaining BSV come from prior knowledge (weight, renal function, drug interactions, genetic polymorphisms) and none have been discovered by trying all the available covariates during a blind search. If you have a counter-example then please let me know, and tell me how much the BSV variance was reduced when this unsuspected covariate was added to a model with appropriate prior-knowledge covariates.

Nick
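A minimal sketch, in Python, of the kind of randomized covariate-set search alluded to above: random restarts plus random add/drop moves over a candidate pool. The covariate names and the fit_ofv() surrogate are invented for illustration only; in practice fit_ofv() would run the candidate model and return its objective function value.

import random

COVARIATES = ["WT", "CRCL", "AGE", "SEX", "CYP2D6"]   # invented candidate pool

def fit_ofv(covariates):
    # Toy surrogate so the example executes: pretend only WT and CRCL carry
    # real signal, with a small penalty per covariate as a crude parsimony term.
    ofv = 1000.0 + 2.0 * len(covariates)
    ofv -= 40.0 if "WT" in covariates else 0.0
    ofv -= 25.0 if "CRCL" in covariates else 0.0
    return ofv

def random_restart_search(candidates, restarts=20, steps=50, seed=1):
    # Each restart begins from a random subset and takes random add/drop moves;
    # the random jumps give the search a chance to escape local minima that can
    # trap a purely greedy stepwise walk.
    rng = random.Random(seed)
    best_set, best_ofv = [], fit_ofv([])
    for _ in range(restarts):
        current = [c for c in candidates if rng.random() < 0.5]
        current_ofv = fit_ofv(current)
        for _ in range(steps):
            cov = rng.choice(candidates)              # toggle one covariate
            trial = ([c for c in current if c != cov]
                     if cov in current else current + [cov])
            trial_ofv = fit_ofv(trial)
            # Greedy acceptance; an annealing-style rule could also take worse moves.
            if trial_ofv < current_ofv:
                current, current_ofv = trial, trial_ofv
        if current_ofv < best_ofv:
            best_set, best_ofv = current, current_ofv
    return best_set, best_ofv

print(random_restart_search(COVARIATES))   # prints the best covariate set and its OFV

With the toy surrogate the search converges on WT and CRCL; the point is only the shape of the search, not the numbers.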
Mark Sale - Next Level Solutions wrote:
Steve,
  I was pretty sure I'd get skewered for the suggestion that this was a
linear decision making process (please note the disclaimer in my
question).  Wasn't sure if it would be Nick or you.  As a devout
Bayesian, I certainly support the idea of letting prior knowledge (any
prior knowledge, not just knowledge of biology) drive the model
building, or at least the models that are considered justifiable.
But, I have to admit that I'm uncomfortable with the concept of the
"art" of modeling.  Beauty is, after all in the eye of the beholder,
and how can we possibly base regulatory decisions on art?  Shouldn't we
be striving for something more objective than art?  If this is art, how
do we deal with the reality that two modelers will get different
answers (I know,... neither of which is right), but in the end we do
need to recommend only one dosing regimen.  If I were taking the drug,
I'd like that decision based on science, not on art.  (although in the
19th century, tuberculosis was referred to as "the beautiful death" -
maybe that is what you mean? ;-)  ).
  But, that is all off the subject, still not sure if there is any
rigorous justification for the way we build models, use of prior
knowledge notwithstanding.
  You suggest (I think) that we should select our model based on what
inference we want to examine.  I agree.  But that is not the question
either.  There are volumes written about how to identify the
best/better model once you've found it.  I'm interested in how we find
it.

Mark Sale MD
Next Level Solutions, LLC
www.NextLevelSolns.com

    
-------- Original Message --------
Subject: RE: [NMusers] General question on modeling
From: Stephen Duffull 
        
   I've lately been reviewing the literature on model
building/selection algorithms.  I have been unable to find
any even remotely rigorous discussion of the way we all build
NONMEM models.  The "structural model first, then variances, then
forward addition/backward elimination" recipe is generally what is
mentioned in a number of places.
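A hedged sketch (illustrative Python, not from the thread) of the covariate-selection part of that recipe, i.e. forward addition followed by backward elimination, with run_ofv() as an invented surrogate for an estimation run. The 3.84 and 6.63 objective-function cut-offs are the usual chi-squared values for p<0.05 and p<0.01 with one degree of freedom.

def run_ofv(covariates):
    # Toy surrogate so the example executes; a real workflow would fit the model
    # with these covariate effects and return its objective function value.
    ofv = 1000.0 + 2.0 * len(covariates)
    ofv -= 40.0 if "WT" in covariates else 0.0
    ofv -= 25.0 if "CRCL" in covariates else 0.0
    return ofv

def forward_addition(candidates, dofv=3.84):        # ~p<0.05, 1 df
    selected, best = [], run_ofv([])
    while True:
        trials = {c: run_ofv(selected + [c])
                  for c in candidates if c not in selected}
        if not trials:
            break
        cov, ofv = min(trials.items(), key=lambda kv: kv[1])
        if best - ofv < dofv:                        # no addition is significant
            break
        selected, best = selected + [cov], ofv
    return selected

def backward_elimination(selected, dofv=6.63):      # ~p<0.01, 1 df
    best = run_ofv(selected)
    while selected:
        trials = {c: run_ofv([s for s in selected if s != c]) for c in selected}
        cov, ofv = min(trials.items(), key=lambda kv: kv[1])
        if ofv - best >= dofv:                       # all remaining covariates are significant
            break
        selected, best = [s for s in selected if s != cov], ofv
    return selected

kept = backward_elimination(forward_addition(["WT", "CRCL", "AGE", "SEX"]))
print(kept)                                          # ['WT', 'CRCL'] under the toy surrogate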
        
I sort of hope that there is no prescriptive approach to model building for
nonlinear mixed effects models since this would suggest that if you follow a
set recipe you will end up with a model that works every time.

I'm sure everyone has anecdotes where a "nonlinear" approach to model
building worked best, e.g. adding covariates prior to completion of building
the structural PK model as is sometimes necessary to be able to build an
adequate structural model.

Surely the idea is to let the sciences of biological systems and statistics
inform the modeller on how to best go about making their model (I have even
heard some refer to this as the "art" of model building :-)  ).

After all, if we believe that all models are wrong then all we really want
from our model is one that performs well for the inference we wish to draw
from it.

Steve
--
Professor Stephen Duffull
Chair of Clinical Pharmacy
School of Pharmacy
University of Otago
PO Box 913 Dunedin
New Zealand
E: www.winpopt.com
      

--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/
  

--
Paul R. Hutson, Pharm.D.
Associate Professor
UW School of Pharmacy
777 Highland Avenue
Madison WI 53705-2222
Tel 608.263.2496
Fax 608.265.5421
Pager 608.265.7000, p7856

Received on Mon Mar 19 2007 - 21:07:48 EDT
