From: Nick Holford <n.holford>

Date: Tue, 25 Aug 2009 13:37:19 +1200

Leonid,

I do not experience "random stops at arbitrary point with arbitrary error" so I don't understand what your problem is.

The objective function is the primary metric of goodness of fit. I agree it is possible to get drops in the objective function that are associated with unreasonable parameter estimates (typically an OMEGA estimate), but I look at the parameter estimates after each run so that I can detect this kind of problem. Part of the display of the parameter estimates is the correlation of random effects if I am using OMEGA BLOCK; this is a weaker, secondary tool. By exploring different models and looking at the change in OBJ, I can get a feel for which parts of the model are informative and which are not. Small changes in OBJ (5-10 points) are not of much interest; a change of at least 50 is usually needed to detect anything of practical importance.
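[Editor's note: to put numbers on these thresholds, recall that the NONMEM objective function is -2 log likelihood up to an additive constant, so for nested models the drop in OBJ can be referred to a chi-square distribution (the likelihood ratio test). A minimal sketch in Python; the drops and degrees of freedom below are illustrative, not from any real run, and the nominal p-values can be optimistic under FO/FOCE approximations.]

```python
from scipy.stats import chi2

def obj_drop_significance(delta_obj, df):
    """Nominal p-value for a drop in NONMEM OBJ between nested models,
    treating the drop as a likelihood ratio test statistic with df equal
    to the number of extra parameters (OBJ is -2 log likelihood up to
    an additive constant)."""
    return chi2.sf(delta_obj, df)

# Critical drops for one extra parameter (df = 1):
print(chi2.ppf(0.95, 1))    # ~3.84  -> p = 0.05
print(chi2.ppf(0.999, 1))   # ~10.83 -> p = 0.001

# Illustrative drops discussed in this thread:
for drop in (5, 10, 50):
    print(drop, obj_drop_significance(drop, df=1))  # 0.025, 0.0016, ~1.5e-12
```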

I don't understand what you find of interest in the correlation of bootstrap parameter estimates. This is really nothing more than you would get from looking at the correlation matrix of the estimates from the covariance step. High estimation correlations point to poor estimability of the parameters, but I think they are not very helpful for pointing to ways to improve the model.
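[Editor's note: the correlation of bootstrap estimates discussed here is straightforward to compute once the bootstrap runs are tabulated. A minimal sketch, assuming a hypothetical file with one row of final estimates per bootstrap run and one column per parameter; the file name, column layout, and parameter labels are illustrative. The 0.9 cut-off echoes the "pathological" threshold mentioned later in the thread.]

```python
import numpy as np

# Hypothetical layout: header row, then one row of final parameter
# estimates per bootstrap run (THETAs, OMEGAs, ...).
names = ["CL", "V", "KA", "OM_CL", "OM_V"]  # illustrative labels
est = np.loadtxt("bootstrap_estimates.csv", delimiter=",", skiprows=1)

# Parameter-by-parameter correlations across bootstrap runs;
# directly comparable to the covariance step's correlation matrix.
corr = np.corrcoef(est, rowvar=False)

# Flag pairs whose estimates move together across runs.
high = np.argwhere(np.triu(np.abs(corr) > 0.9, k=1))
for i, j in high:
    print(f"{names[i]} vs {names[j]}: r = {corr[i, j]:+.2f}")
```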

Nevertheless, I can agree to disagree on our modelling art :-)

Nick

Leonid Gibiansky wrote:

> Nick,
>
> I think it is dangerous to rely heavily on the objective function (let alone ONLY the objective function) in the model development process. I am very surprised that you use it as the main diagnostic. If you think that NONMEM randomly stops at an arbitrary point with an arbitrary error, how can you rely on the result of this random process as the main guide in model development? I pay attention to the OF, but only as one of a large toolbox of diagnostics (most of them graphical). I routinely see examples where over-parameterized, unstable models provide better objective function values, but this is not a sufficient reason to select them. If you reject them in favor of simpler and more stable models, you will see fewer random stops and more models with convergence and successful covariance steps.
>
> Even with the bootstrap, I see the main real output of the procedure in revealing the correlation of the parameter estimates rather than in the computation of CIs. CIs are less informative, while visualization of correlations may suggest ways to improve the model.
>
> Anyway, it looks like there are at least as many modeling methods as modelers: fortunately for all of us, this is still art, not science; therefore, the time when everything will be done by computers is not too close.
>
> Leonid
>
> --------------------------------------
> Leonid Gibiansky, Ph.D.
> President, QuantPharm LLC
> web: www.quantpharm.com
> e-mail: LGibiansky at quantpharm.com
> tel: (301) 767 5566

> Nick Holford wrote:
>> Mats, Leonid,
>>
>> Thanks for your definitions. I think I prefer the one provided by Mats, but he doesn't say what his test for goodness-of-fit might be.
>>
>> Leonid already assumes that convergence/covariance are diagnostic, so that doesn't help at all with an independent definition of overparameterization. Correlation of random effects is often a very important part of a model -- especially for future predictions -- so I don't see that as a useful test -- unless you restrict it to pathological values, e.g. |correlation| > 0.9? Even with very high correlations I sometimes leave them in the model, because setting the covariance to zero often causes quite a big worsening of the OBJ.
>>
>> My own view is that "overparameterization" is not a black and white entity. Parameters can be estimated with decreasing degrees of confidence depending on many things, such as the design and the adequacy of the model. Parameter confidence intervals (preferably by bootstrap) are the way I would evaluate how well parameters are estimated. I usually rely on OBJ changes alone during model development, with a VPC and bootstrap confidence intervals when I seem to have extracted all I can from the data. The VPC and CIs may well prompt further model development, and the cycle continues.
>>
>> Nick
>>
>> Leonid Gibiansky wrote:
>>> Hi Nick,
>>>
>>> I am not sure how you build your models, but I use convergence, relative standard errors, the correlation matrix of parameter estimates (reported by the covariance step), and the correlation of random effects quite extensively when I decide whether I need extra compartments, extra random effects, nonlinearity in the model, etc. For me they are very useful as diagnostics of over-parameterization. This is direct evidence (proof?) that they are useful :)
>>>
>>> For new modelers who are just starting to learn, have limited experience, or run into problems along the way, I would advise paying careful attention to these issues, since they often help me to detect problems. You seem to disagree with me; that is fine, I am not trying to impose my way of doing the analysis on you or anybody else. This is just advice: you (and others) are free to use it or ignore it :)
>>>
>>> Thanks
>>> Leonid
>>
>> Mats Karlsson wrote:
>>> <<I would say that if you can remove parameters/model components without detriment to goodness-of-fit then the model is overparameterized.>>
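[Editor's note: on the bootstrap confidence intervals discussed in the quoted exchange above, a percentile CI is just the empirical quantiles of the bootstrap estimates. A minimal sketch, reusing the hypothetical bootstrap_estimates.csv layout from the earlier snippet.]

```python
import numpy as np

names = ["CL", "V", "KA", "OM_CL", "OM_V"]  # illustrative labels
est = np.loadtxt("bootstrap_estimates.csv", delimiter=",", skiprows=1)

# 95% percentile confidence intervals: the 2.5th and 97.5th percentiles
# of each parameter's estimates across bootstrap runs.
lo, hi = np.percentile(est, [2.5, 97.5], axis=0)
for name, lower, upper in zip(names, lo, hi):
    print(f"{name}: 95% CI [{lower:.3g}, {upper:.3g}]")
```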

--
Nick Holford, Professor Clinical Pharmacology
Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
n.holford
mobile: +64 21 46 23 53
http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford

Received on Mon Aug 24 2009 - 21:37:19 EDT
