NONMEM Users Network Archive

Re: What does convergence/covariance show?

From: Leonid Gibiansky <LGibiansky>
Date: Tue, 25 Aug 2009 00:15:38 -0400

Nick,
Concerning "random stops at arbitrary point with arbitrary error", I was
referring to your statement: "NONMEM VI will fail to converge or not
complete the covariance step more or less at random".

For the OFV, you did not tell the entire story. If you looked only at
the OF, you would go for the absolute minimum of the OF. If you ignore
small changes, that means you use some other diagnostic to (possibly)
select a model with a higher OFV (if the difference is not too large,
within 5-10-20 units), preferring that model based on other signs
(convergence? plots? number of parameters?). This is exactly what I was
referring to when I mentioned that the OF is just one of the criteria.
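
As a rough illustration of the size of those differences under the
usual likelihood-ratio yardstick, here is a minimal Python sketch; it
assumes the standard chi-square approximation for nested models and
that scipy is available:

# Approximate significance of a drop in the NONMEM objective function
# (OFV) between nested models: delta-OFV is compared to a chi-square
# distribution with df equal to the number of added parameters.
from scipy.stats import chi2

for delta_ofv in (3.84, 5.0, 10.0, 20.0):
    for df in (1, 2):
        p = chi2.sf(delta_ofv, df)  # upper-tail probability
        print(f"delta OFV = {delta_ofv:5.2f}, df = {df}: p = {p:.4f}")

Even a 10-20 unit drop can be nominally significant, which is why the
other signs above still matter.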

One common example where the OF is not the best guide is the modeling
of absorption. You can spend weeks building progressively more and more
complicated models of absorption profiles (with parallel, sequential,
time-dependent, M-time-modeled absorption, etc.) with large drops in OF
(corresponding to minor improvements for a few patients) but with no
gain in the predictive power of your primary parameters of interest,
for example, steady-state exposure.

To provide an example of a bootstrap plot, I have put one here:

http://quantpharm.com/pdf_files/example.pdf

For 1000 bootstrap problems, each parameter estimate was plotted
against each of the other parameter estimates. You can immediately see
that SLOP and EC50 are strongly correlated while all the other
parameters are not. CIs and even the correlation coefficient values do
not tell the whole story about the model. You can get similar results
from the covariance-step correlation matrix of the parameter estimates,
but visualizing it as clearly as the bootstrap results requires
simulations. The advantage of the bootstrap plots is that one can
easily study the correlations and variability not only of the primary
parameters (theta, omega, etc.) but also the relations between derived
parameters.
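
As a sketch of how such a plot can be produced from a table of
bootstrap estimates, here is a minimal Python example; the file name
and column names are hypothetical, and pandas is assumed (any similar
tool would do):

# Pair plot and correlation matrix of bootstrap parameter estimates.
# Assumes a CSV with one row per bootstrap problem and one column per
# parameter (file and column names are hypothetical).
import numpy as np
import pandas as pd
from pandas.plotting import scatter_matrix

est = pd.read_csv("bootstrap_estimates.csv")  # e.g. CL, V, SLOP, EC50

# Pairwise scatter plots: a strongly correlated pair (such as SLOP and
# EC50 above) shows up as a tight diagonal cloud.
axes = scatter_matrix(est, figsize=(8, 8), diagonal="hist")
axes[0, 0].get_figure().savefig("bootstrap_pairs.pdf")

# The same information summarized as a correlation matrix, comparable
# to the covariance-step correlation matrix of parameter estimates.
print(est.corr().round(2))

# Derived parameters can be added as extra columns and studied in the
# same way, e.g. a hypothetical half-life computed from CL and V:
est["THALF"] = np.log(2.0) * est["V"] / est["CL"]
print(est[["CL", "V", "THALF"]].corr().round(2))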

Leonid

--------------------------------------
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web: www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel: (301) 767 5566




Nick Holford wrote:
> Leonid,
>
> I do not experience "random stops at arbitrary point with arbitrary
> error" so I don't understand what your problem is.
>
> The objective function is the primary metric of goodness of fit. I agree
> it is possible to get drops in objective function that are associated
> with unreasonable parameter estimates (typically an OMEGA estimate). But
> I look at the parameter estimates after each run so that I can detect
> this kind of problem. Part of the display of the parameter estimates is
> the correlation of the random effects if I am using OMEGA BLOCK. This is
> also a weaker, secondary tool. By exploring different models I can get a
> feel for which parts of the model are informative and which are not by
> looking at the change in OBJ. Small (5-10) changes in OBJ are not of
> much interest. A change of OBJ of at least 50 is usually needed to
> detect anything of practical importance.
>
> I don't understand what you find of interest in the correlation of
> bootstrap parameter estimates. This is really nothing more than you
> would get from looking at the correlation matrix of the estimates from
> the covariance step. High estimation correlations point to poor
> estimability of the parameters, but I think they are not very helpful
> for pointing to ways to improve the model.
>
> Nevertheless I can agree to disagree on our modelling art :-)
>
> Nick
>
> Leonid Gibiansky wrote:
>> Nick,
>>
>> I think it is dangerous to rely heavily on the objective function (let
>> alone on ONLY the objective function) in the model development
>> process. I am very surprised that you use it as the main diagnostic.
>> If you think that NONMEM randomly stops at an arbitrary point with an
>> arbitrary error, how can you rely on the result of this random process
>> as the main guide in model development? I pay attention to the OF, but
>> only as one of a large toolbox of diagnostics (most of them graphical).
>> I routinely see examples where over-parameterized, unstable models
>> provide better objective function values, but this is not a sufficient
>> reason to select them. If you reject them in favor of simpler and more
>> stable models, you will see fewer random stops and more models with
>> convergence and successful covariance steps.
>>
>> Even with the bootstrap, I see the main real output of this procedure
>> as revealing the correlations of the parameter estimates rather than
>> the computation of CIs. CIs are less informative, while visualization
>> of the correlations may suggest ways to improve the model.
>>
>> Anyway, it looks like there are at least as many modeling methods as
>> modelers: fortunately for all of us, this is still art, not science;
>> therefore, the time when everything will be done by computers is not
>> too close.
>>
>> Leonid
>>
>> --------------------------------------
>> Leonid Gibiansky, Ph.D.
>> President, QuantPharm LLC
>> web: www.quantpharm.com
>> e-mail: LGibiansky at quantpharm.com
>> tel: (301) 767 5566
>>
>>
>>
>>
>> Nick Holford wrote:
>>> Mats, Leonid,
>>>
>>> Thanks for your definitions. I think I prefer the one provided by
>>> Mats, but he doesn't say what his test for goodness-of-fit might be.
>>>
>>> Leonid already assumes that convergence/covariance are diagnostic, so
>>> it doesn't help at all with an independent definition of
>>> overparameterization. Correlation of random effects is often a very
>>> important part of a model -- especially for future predictions -- so
>>> I don't see that as a useful test -- unless you restrict it to
>>> pathological values, e.g. |correlation| > 0.9? Even with very high
>>> correlations I sometimes leave them in the model because setting the
>>> covariance to zero often causes quite a big worsening of the OBJ.
>>>
>>> My own view is that "overparameterization" is not a black and white
>>> entity. Parameters can be estimated with decreasing degrees of
>>> confidence depending on many things, such as the design and the
>>> adequacy of the model. Parameter confidence intervals (preferably by
>>> bootstrap) are the way I would evaluate how well parameters are
>>> estimated. I usually rely on OBJ changes alone during model
>>> development, with a VPC and bootstrap confidence intervals when I
>>> seem to have extracted all I can from the data. The VPC and CIs may
>>> well prompt further model development, and the cycle continues.
>>>
>>> Nick
>>>
>>>
>>> Leonid Gibiansky wrote:
>>>> Hi Nick,
>>>>
>>>> I am not sure how you build your models, but I am using convergence,
>>>> relative standard errors, the correlation matrix of the parameter
>>>> estimates (reported by the covariance step), and the correlations of
>>>> the random effects quite extensively when I decide whether I need
>>>> extra compartments, extra random effects, nonlinearity in the model,
>>>> etc. For me they are very useful as diagnostics of
>>>> over-parameterization. This is direct evidence (proof?) that they
>>>> are useful :)
>>>>
>>>> For new modelers who are just starting to learn how to do it, or who
>>>> have limited experience, or who run into problems along the way, I
>>>> would advise paying careful attention to these issues, since they
>>>> often help me to detect problems. You seem to disagree with me; that
>>>> is fine, I am not trying to impose my way of doing the analysis on
>>>> you or anybody else. This is just advice: you (and others) are free
>>>> to use it or ignore it :)
>>>>
>>>> Thanks
>>>> Leonid
>>>
>>>
>>> Mats Karlsson wrote:
>>>> <<I would say that if you can remove parameters/model components
>>>> without detriment to goodness-of-fit, then the model is
>>>> overparameterized.>>
>>>>
>>>
>
Received on Tue Aug 25 2009 - 00:15:38 EDT

The NONMEM Users Network is maintained by ICON plc. Requests to subscribe to the network should be sent to: nmusers-request@iconplc.com.

Once subscribed, you may contribute to the discussion by emailing: nmusers@globomaxnm.com.