NONMEM Users Network Archive

Hosted by Cognigen

RE: What does convergence/covariance show?

From: Hu, Chuanpu [CNTUS] <CHu25>
Date: Tue, 25 Aug 2009 16:39:10 -0400

We have conducted simulations to show that an over-parameterized model,
even if "true" and "significant," could give worse predictions (ref
below). The simulations were conducted more in the context of
interpolation. What happens in extrapolation will depend very much on
the specific situation. However, this suggests that the empirical model
may deserve more consideration.

Reference: Hu C, Dong Y. Estimating the predictive quality of
dose-response after model selection. Statistics in Medicine 2007;
26:3114-3139.

Chuanpu

~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*
Chuanpu Hu, Ph.D.
Director, Pharmacometrics
Pharmacokinetics
C-3-3
Biotechnology, Immunology & Oncology (B.I.O.)
Johnson and Johnson
200 Great Valley Parkway
Malvern, PA 19355
Tel: 610-651-7423
Fax: (610) 993-7801
E-mail: CHu25
~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*



-----Original Message-----
From: owner-nmusers
On Behalf Of Leonid Gibiansky
Sent: Tuesday, August 25, 2009 2:59 PM
To: Michael.J.Fossler
Cc: Mark Sale - Next Level Solutions; 'nmusers'; owner-nmusers
Subject: Re: [NMusers] What does convergence/covariance show?

Mike,
Your ground is only as firm as your assumptions unless data can add
something useful. If you believe in your assumptions, then postulate an
Emax model and FIX the Emax value: you will end up with a well-defined
model. Or put an informative prior on this value and use a Bayesian
approach. Both methods are acceptable. What is not correct, in my
opinion, is to accept an Emax value estimated from a dataset that does
not have sufficient information to estimate it.
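
A minimal sketch of the two well-defined alternatives (hypothetical data
and prior value; scipy's curve_fit standing in for NONMEM estimation):

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
conc = np.linspace(1, 10, 40)          # well below EC50: linear range only
eff = 100 * conc / (50 + conc) + rng.normal(0, 1, conc.size)

def emax_model(c, emax, ec50):
    return emax * c / (ec50 + c)

# Estimating both parameters from linear-range data: ill-determined, with
# near-unit correlation between the Emax and EC50 estimates.
p_free, cov = curve_fit(emax_model, conc, eff, p0=[50, 25], maxfev=10000)
print(p_free, cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1]))

# Fixing Emax from prior knowledge: only EC50 is estimated and the
# problem becomes well defined. (An informative prior is the softer version.)
EMAX_FIXED = 100.0                     # assumed prior value, illustration only
p_fix, _ = curve_fit(lambda c, ec50: emax_model(c, EMAX_FIXED, ec50),
                     conc, eff, p0=[25], maxfev=10000)
print(p_fix)
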
Leonid


--------------------------------------
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web: www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel: (301) 767 5566




Michael.J.Fossler
>
> I disagree - it's the same topic. If you have a dataset where, because
> of study limitations, Emax cannot be estimated, then you have two
> choices. Either fit a linear model, knowing that it is pharmacologically
> wrong and close to useless for anything other than interpolation within
> the limits of your data (which can be useful, no doubt). Or you can
> recognize the limitations of the data and postulate an Emax based either
> on 1) prior knowledge or 2) pharmacological principles.
>
> I think you are on very firm ground with the second choice, and as long
> as you are up-front with your assumptions, this approach is very useful.
>
>
>
> *"Leonid Gibiansky" <LGibiansky
>
> 25-Aug-2009 14:39
>
>
> To
> Michael.J.Fossler
> cc
> "Mark Sale - Next Level Solutions" <mark
> "'nmusers'" <nmusers
> Subject
> Re: [NMusers] What does convergence/covariance show?
>
>
>
>
>
>
>
>
> Mike,
> This is an entirely different topic: how to use prior knowledge. There
> exist a number of ways (e.g., Bayesian analysis or fixing a parameter
> based on prior knowledge) to do it properly, without relying on the
> estimates from over-parameterized models.
> Leonid
>
> --------------------------------------
> Leonid Gibiansky, Ph.D.
> President, QuantPharm LLC
> web: www.quantpharm.com
> e-mail: LGibiansky at quantpharm.com
> tel: (301) 767 5566
>
>
>
>
> Michael.J.Fossler
> >
> > Leonid, this is not necessarily true. You may have data that can't be
> > used to directly add to the model (e.g., another compound with a
> > similar mechanism of action). Or, you may be able to postulate a
> > plausible Emax based on reasonable biologic limits. In any case,
> > putting on a reasonable limit based on biology and pharmacology does
> > not guarantee that you are "correct" (whatever that means), but it
> > puts you on firmer ground than using a structural model (linear PD)
> > that you know can't possibly be correct and that you cannot use for
> > extrapolation, which (as Mark and Jeff point out) is the usual reason
> > we do this stuff.
> >
> >
> >
> >
> >
> > *"Leonid Gibiansky" <LGibiansky
> > Sent by: owner-nmusers
> >
> > 25-Aug-2009 14:03
> >
> >
> > To
> > "Mark Sale - Next Level Solutions"
> <mark
> > cc
> > "'nmusers'" <nmusers
> > Subject
> > Re: [NMusers] What does convergence/covariance
show?
> >
> >
> >
> >
> >
> >
> >
> >
> > Mark,
> >
> > This is a rather weak defense. If you have data to support the
> > model, you can use it to build the mechanistic model. If the data do
> > not support the model, there is nothing convincing that you can do
> > except to say that the model is (just an example) linear in the
> > interval D1-D2, and unknown when D > D2. Anything in excess of this
> > simple statement will be either speculation or unrelated to the
> > particular data in hand (prior knowledge). With the linear model, you
> > will be correct in the D1-D2 range, and will not go into the D > D2
> > range. With the nonlinear model, you will be correct in the range
> > D1-D2 (same as with the linear model), and you will be
> > nobody-knows-correct-or-wrong with your wild guess of the nonlinear
> > model. So this "more mechanistic" approach would be just a guess
> > expressed in terms of the equation.
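> >
> > A minimal Python sketch of this point (hypothetical numbers, not
> > NONMEM code): a linear model matched to the Emax curve inside D1-D2
> > agrees with it there but diverges badly beyond D2.
> >
> > import numpy as np
> >
> > emax, ed50 = 100.0, 200.0               # assumed "true" Emax model
> > d1, d2 = 10.0, 50.0                     # studied dose interval
> > slope = np.mean(emax / (ed50 + np.linspace(d1, d2, 5)))
> > for d in (25.0, 50.0, 200.0, 800.0):    # last two are extrapolations
> >     print(d, emax * d / (ed50 + d), slope * d)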
> >
> > I also do not think that this is a stock market, where "risky" is an
> > appropriate term. You probably mean "uncertain"? Or unreliable (read:
> > with large standard errors)?
> >
> > Leonid
> >
> > --------------------------------------
> > Leonid Gibiansky, Ph.D.
> > President, QuantPharm LLC
> > web: www.quantpharm.com
> > e-mail: LGibiansky at quantpharm.com
> > tel: (301) 767 5566
> >
> >
> >
> >
> > Mark Sale - Next Level Solutions wrote:
> > >
> > > Ken,
> > > In defense of the mechanistic modeler:
> > >
> > > I suspect that generally what we want to do with models is
> > > extrapolate. That is, predict how people who are older, younger,
> > > larger, smaller, on drug longer, on higher doses, have interacting
> > > meds, 2D6 deficiency, other disease etc. will behave. Predicting
> > > data within the range of what you've studied isn't really all that
> > > interesting, and can, for the most part, be left to traditional
> > > statistics - and falls into the "stamp collecting" category from
> > > Rutherford (another good Kiwi I believe). That, I think, is an
> > > important difference between hypothesis testing (which is very
> > > important) and modeling/estimation (which is a lot more interesting
> > > and, inherently, more risky).
> > > So, if you model a linear relationship because that is all the range
> > > of your data will support (even though you know linear relationships
> > > are very rare in biology) you've essentially precluded any
> > > opportunity to extrapolate beyond your data. If you do so, you will
> > > certainly be wrong. Your model is well supported, not risky, but not
> > > very interesting. Imposing an Emax (or other biologically plausible)
> > > model will result in you being wrong sometimes (as opposed to always
> > > wrong with the linear model).
> > > But we must always make the "customer" aware of the limitations of
> > > the analysis - some guess at the chances of it being very wrong.
> > >
> > > Bottom line - if we want to say something interesting, more
> > > interesting than traditional statistics, we will need to take risks
> > > with less than optimally supported mechanistic models.
> > >
> > > Mark Sale MD
> > > Next Level Solutions, LLC
> > > www.NextLevelSolns.com
> > > 919-846-9185
> > >
> > > -------- Original Message --------
> > > Subject: RE: [NMusers] What does convergence/covariance show?
> > > From: "Ken Kowalski" <ken.kowalski
> > > Date: Tue, August 25, 2009 12:03 pm
> > > To: "'nmusers'" <nmusers
> > >
> > > Nick,
> > >
> > > It sounds like you do recognize that models are often
> > > over-parameterized by your statements:
> > >
> > > "It is quite common to find that the estimates EC50 and Emax are
> > > highly correlated (I assume SLOP=EMAX/EC50). It would also be common
> > > to find that the random effects of EMAX and EC50 are also
> > > correlated. That is expected given the limitations of most
> > > pharmacodynamic designs."
> > >
> > >
> > > When EC50 and Emax are highly correlated I think you will find that
> > > a simplified linear model will fit the data just as well with no
> > > real impact on goodness-of-fit (e.g., OFV). If we only observe
> > > concentrations in the linear range of an Emax curve because of a
> > > poor design, then it is no surprise that a linear model may perform
> > > as well as an Emax model within the range of our data. If the design
> > > is so poor in information content regarding the Emax relationship
> > > because of too narrow a range of concentrations, this will indeed
> > > lead to convergence and COV step failures in fitting the Emax model.
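> > >
> > > A minimal sketch of this (made-up data; scipy's curve_fit standing
> > > in for NONMEM): with concentrations far below EC50, the two fits
> > > are essentially indistinguishable.
> > >
> > > import numpy as np
> > > from scipy.optimize import curve_fit
> > >
> > > rng = np.random.default_rng(7)
> > > conc = np.linspace(1, 20, 60)           # true EC50 is 500: linear range
> > > eff = 100 * conc / (500 + conc) + rng.normal(0, 0.2, conc.size)
> > >
> > > def emax(c, em, e50):
> > >     return em * c / (e50 + c)
> > >
> > > p_emax, _ = curve_fit(emax, conc, eff, p0=[50, 100], maxfev=10000)
> > > p_lin = np.polyfit(conc, eff, 1)        # simple linear alternative
> > > print(np.sum((eff - emax(conc, *p_emax)) ** 2),
> > >       np.sum((eff - np.polyval(p_lin, conc)) ** 2))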
> > >
> > > Your statement that you would be unwilling to accept the linear
> > > model in this setting really speaks to the plight of the mechanistic
> > > modeler. It is important to note that an over-parameterized model
> > > does not mean that the model is misspecified. A model can be
> > > correctly specified but still be over-parameterized because the
> > > data/design simply will not support estimation of all the parameters
> > > in the correctly specified model. The mechanistic modeler who has a
> > > strong biological prior favoring the more complex model is reluctant
> > > to accept a simplified model that he/she knows has to be wrong
> > > (e.g., we would not expect that the linear model would hold up at
> > > considerably higher concentrations than those observed in the
> > > existing data). The problem with accepting the more complex model in
> > > this setting is that we can't really trust the estimates we get
> > > (when the model has convergence difficulties and COV step failures
> > > as a result of over-parameterization) because there may be an
> > > infinite set of solutions to the parameters that give the same
> > > goodness-of-fit (i.e., a very flat likelihood surface). You can do
> > > all the bootstrapping you want but it is not a panacea for the
> > > deficiencies of a poor design.
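> > >
> > > A small sketch of the flat-likelihood point (same kind of made-up,
> > > linear-range data): along the ridge where Emax/EC50 is constant,
> > > very different parameter pairs give nearly the same fit.
> > >
> > > import numpy as np
> > >
> > > rng = np.random.default_rng(7)
> > > conc = np.linspace(1, 20, 60)
> > > eff = 100 * conc / (500 + conc) + rng.normal(0, 0.2, conc.size)
> > >
> > > slope = 0.2                             # Emax/EC50 held constant
> > > for ec50 in (300.0, 500.0, 1500.0, 5000.0):
> > >     emax = slope * ec50
> > >     sse = np.sum((eff - emax * conc / (ec50 + conc)) ** 2)
> > >     print(emax, ec50, sse)              # SSE barely changes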
> > >
> > > While I like to fit mechanistic models just as much as the next
> > > guy, I also like my models to be stable (not over-parameterized). In
> > > this setting, the pragmatist in me would accept the simpler model,
> > > acknowledge the limitations of the design and model, and I would be
> > > very cautious not to extrapolate my model too far from the range of
> > > my existing data. More importantly, I would advocate improving the
> > > situation by designing a better study so that we can get the
> > > information we need to support a more appropriate model that will
> > > put us in a better position to extrapolate to new experimental
> > > conditions. We review the COV step output (looking for high
> > > correlations such as between the estimates of EC50 and Emax) and fit
> > > simpler models not because we prefer simpler models per se, but
> > > because we want to fully understand the limitations of our design.
> > > Of course, this simple example of a poor design with too narrow a
> > > concentration and/or dose range to estimate the Emax relationship
> > > can be easily uncovered in a simple plot of the data; however, for
> > > more complex models the nature of the over-parameterization and the
> > > limitations of the design can be harder to detect, which is why we
> > > need a variety of strategies and diagnostics including plots, COV
> > > step output, fitting alternative simpler models, etc. to fully
> > > understand these limitations.
> > >
> > > Just my 2 cents. :)
> > >
> > > Ken
> > >
> > > -----Original Message-----
> > > From: owner-nmusers On Behalf Of Nick Holford
> > > Sent: Tuesday, August 25, 2009 1:09 AM
> > > To: nmusers
> > > Subject: Re: [NMusers] What does convergence/covariance show?
> > >
> > > Leonid,
> > >
> > > I did not say NONMEM stops at random. Whether or not the stopping
> > > point is associated with convergence or a successful covariance step
> > > appears to be at random. The parameter values at the stopping point
> > > will typically be negligibly different. Thus the stopping point is
> > > not at random. You can easily observe this in your bootstrap runs.
> > > Compare the parameter distribution for runs that converge with those
> > > that don't and you will find there are negligible differences in the
> > > distributions.
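> > >
> > > A minimal sketch of that comparison (hypothetical file and column
> > > names, not the output format of any particular tool):
> > >
> > > import pandas as pd
> > >
> > > boot = pd.read_csv("bootstrap_results.csv")  # one row per bootstrap run
> > > for p in ["EMAX", "EC50"]:
> > >     print(boot.groupby("minimization_successful")[p]
> > >               .describe()[["mean", "50%", "std"]])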
> > >
> > > I did not say that I ignore small changes in OFV but my decisions
> > > are guided by the size of the change.
> > >
> > > I do not waste much time modelling absorption. It rarely is
of any
> > > relevance to try to fit all the small details.
> > >
> > > I don't see anything in the plot of SLOP vs EC50 that is not
> > > revealed by R=0.93. If the covariance step ran you would see a
> > > similar number in the correlation matrix of the estimate. It is
> > > quite common to find that the estimates EC50 and Emax are highly
> > > correlated (I assume SLOP=EMAX/EC50). It would also be common to
> > > find that the random effects of EMAX and EC50 are also correlated.
> > > That is expected given the limitations of most pharmacodynamic
> > > designs. However, I would not simplify the model to a linear model
> > > just because of these correlations. I would pay much more attention
> > > to the change in OFV comparing an Emax with a linear model plus
> > > whatever was known about the studied concentration range and the
> > > EC50.
> > >
> > > I do agree that bootstraps can be helpful for calculating CIs on
> > > secondary parameters.
> > >
> > > Nick
> > >
> > > Leonid Gibiansky wrote:
> > > > Nick,
> > > > Concerning "random stops at arbitrary point with arbitrary error"
> > > > I was referring to your statement: "NONMEM VI will fail to
> > > > converge or not complete the covariance step more or less at
> > > > random".
> > > >
> > > > For OFV, you did not tell the entire story. If you would look only
> > > > at OF, you would go for the absolute minimum of OF. If you ignore
> > > > small changes, it means that you use some other diagnostic to
> > > > (possibly) select a model with higher OFV (if the difference is
> > > > not too high, within 5-10-20 units), preferring that model based
> > > > on other signs (convergence? plots? number of parameters?). This
> > > > is exactly what I was referring to when I mentioned that OF is
> > > > just one of the criteria.
> > > >
> > > > One common example where OF is not the best guide is the modeling
> > > > of absorption. You can spend weeks building progressively more and
> > > > more complicated models of absorption profiles (with parallel,
> > > > sequential, time-dependent, M-time-modeled absorption, etc.) with
> > > > a large drop in OF (that corresponds to a minor improvement for a
> > > > few patients), with no gain in predictive power of your primary
> > > > parameters of interest, for example, steady-state exposure.
> > > >
> > > > To provide an example of the bootstrap plot, I put it here:
> > > >
> > > > http://quantpharm.com/pdf_files/example.pdf
> > > >
> > > > For 1000 bootstrap problems, parameter estimates were plotted
> > > > versus parameter estimates. You can immediately see that SLOP and
> > > > EC50 are strongly correlated while all other parameters are not
> > > > correlated. CIs and even the correlation coefficient value do not
> > > > tell the whole story about the model. You can get similar results
> > > > from the covariance-step correlation matrix of parameter
> > > > estimates, but it requires simulations to visualize it as clearly
> > > > as from bootstrap results. An advantage of bootstrap plots is that
> > > > one can easily study correlations and variability of not only
> > > > primary parameters (such as theta, omega, etc.), but also
> > > > relations between derived parameters.
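> > > >
> > > > A minimal sketch of such a pairs plot (hypothetical file and
> > > > column names); a derived parameter like SLOP=EMAX/EC50 can be
> > > > added before plotting:
> > > >
> > > > import pandas as pd
> > > > import matplotlib.pyplot as plt
> > > > from pandas.plotting import scatter_matrix
> > > >
> > > > boot = pd.read_csv("bootstrap_results.csv")
> > > > boot["SLOP"] = boot["EMAX"] / boot["EC50"]
> > > > scatter_matrix(boot[["EMAX", "EC50", "SLOP"]], diagonal="hist")
> > > > plt.savefig("bootstrap_pairs.png")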
> > > >
> > > > Leonid
> > > >
> > > > --------------------------------------
> > > > Leonid Gibiansky, Ph.D.
> > > > President, QuantPharm LLC
> > > > web: www.quantpharm.com
> > > > e-mail: LGibiansky at quantpharm.com
> > > > tel: (301) 767 5566
> > > >
> > > >
> > > >
> > > >
> > > > Nick Holford wrote:
> > > > > Leonid,
> > > > >
> > > > > I do not experience "random stops at arbitrary point with
> > > > > arbitrary error" so I don't understand what your problem is.
> > > > >
> > > > > The objective function is the primary metric of goodness of
> > > > > fit. I agree it is possible to get drops in objective function
> > > > > that are associated with unreasonable parameter estimates
> > > > > (typically an OMEGA estimate). But I look at the parameter
> > > > > estimates after each run so that I can detect this kind of
> > > > > problem. Part of the display of the parameter estimates is the
> > > > > correlation of random effects if I am using OMEGA BLOCK. This is
> > > > > also a weaker secondary tool. By exploring different models I
> > > > > can get a feel for which parts of the model are informative and
> > > > > which are not by looking at the change in OBJ. Small (5-10)
> > > > > changes in OBJ are not of much interest. A change of OBJ of at
> > > > > least 50 is usually needed to detect anything of practical
> > > > > importance.
> > > > >
> > > > > I don't understand what you find of interest in the correlation
> > > > > of bootstrap parameter estimates. This is really nothing more
> > > > > than you would get from looking at the correlation matrix of the
> > > > > estimate from the covariance step. High estimation correlations
> > > > > point to poor estimability of the parameters, but I think they
> > > > > are not very helpful for pointing to ways to improve the model.
> > > > >
> > > > > Nevertheless I can agree to disagree on our modelling art :-)
> > > > >
> > > > > Nick
> > > > >
> > > > > Leonid Gibiansky wrote:
> > > > >> Nick,
> > > > >>
> > > > >> I think it is dangerous to rely heavily on the objective
> > > > >> function (let alone on ONLY the objective function) in the
> > > > >> model development process. I am very surprised that you use it
> > > > >> as the main diagnostic. If you think that NONMEM randomly stops
> > > > >> at an arbitrary point with arbitrary error, how can you rely on
> > > > >> the result of this random process as the main guide in model
> > > > >> development? I pay attention to the OF, but only as one of a
> > > > >> large toolbox of other diagnostics (most of them graphics). I
> > > > >> routinely see examples where over-parameterized unstable models
> > > > >> provide better objective function values, but this is not a
> > > > >> sufficient reason to select those. If you reject them in favor
> > > > >> of simpler and more stable models, you would see fewer random
> > > > >> stops and more models with convergence and successful
> > > > >> covariance steps.
> > > > >>
> > > > >> Even with bootstrap, I see the main real output of this
> > > > >> procedure in revealing the correlation of the parameter
> > > > >> estimates rather than in computation of CIs. CIs are less
> > > > >> informative, while visualization of correlations may suggest
> > > > >> ways to improve the model.
> > > > >>
> > > > >> Anyway, it looks like there are at least as many modeling
> > > > >> methods as modelers: fortunately for all of us, this is still
> > > > >> art, not science; therefore, the time when everything will be
> > > > >> done by computers is not too close.
> > > > >>
> > > > >> Leonid
> > > > >>
> > > > >> --------------------------------------
> > > > >> Leonid Gibiansky, Ph.D.
> > > > >> President, QuantPharm LLC
> > > > >> web: www.quantpharm.com
> > > > >> e-mail: LGibiansky at quantpharm.com
> > > > >> tel: (301) 767 5566
> > > > >>
> > > > >>
> > > > >>
> > > > >>
> > > > >> Nick Holford wrote:
> > > > >>> Mats, Leonid,
> > > > >>>
> > > > >>> Thanks for your definitions. I think I prefer that provided
> > > > >>> by Mats but he doesn't say what his test for goodness-of-fit
> > > > >>> might be.
> > > > >>>
> > > > >>> Leonid already assumes that convergence/covariance are
> > > > >>> diagnostic, so it doesn't help at all with an independent
> > > > >>> definition of overparameterization. Correlation of random
> > > > >>> effects is often a very important part of a model --
> > > > >>> especially for future predictions -- so I don't see that as a
> > > > >>> useful test -- unless you restrict it to pathological values,
> > > > >>> e.g. |correlation|>0.9? Even with very high correlations I
> > > > >>> sometimes leave them in the model because setting the
> > > > >>> covariance to zero often makes quite a big worsening of the
> > > > >>> OBJ.
> > > > >>>
> > > > >>> My own view is that "overparameterization" is not a black and
> > > > >>> white entity. Parameters can be estimated with decreasing
> > > > >>> degrees of confidence depending on many things such as the
> > > > >>> design and the adequacy of the model. Parameter confidence
> > > > >>> intervals (preferably by bootstrap) are the way I would
> > > > >>> evaluate how well parameters are estimated. I usually rely on
> > > > >>> OBJ changes alone during model development, with a VPC and
> > > > >>> bootstrap confidence interval when I seem to have extracted
> > > > >>> all I can from the data. The VPC and CIs may well prompt
> > > > >>> further model development and the cycle continues.
> > > > >>>
> > > > >>> Nick
> > > > >>>
> > > > >>>
> > > > >>> Leonid Gibiansky wrote:
> > > > >>>> Hi Nick,
> > > > >>>>
> > > > >>>> I am not sure how you build the models, but I am using
> > > > >>>> convergence, relative standard errors, the correlation
> > > > >>>> matrix of parameter estimates (reported by the covariance
> > > > >>>> step), and correlation of random effects quite extensively
> > > > >>>> when I decide whether I need extra compartments, extra random
> > > > >>>> effects, nonlinearity in the model, etc. For me they are very
> > > > >>>> useful as diagnostics of over-parameterization. This is the
> > > > >>>> direct evidence (proof?) that they are useful :)
> > > > >>>>
> > > > >>>> For new modelers who are just starting to learn how to do
> > > > >>>> it, or have limited experience, or have problems on the way,
> > > > >>>> I would advise paying careful attention to these issues since
> > > > >>>> they often help me to detect problems. You seem to disagree
> > > > >>>> with me; that is fine, I am not trying to impose on you or
> > > > >>>> anybody else my way of doing the analysis. This is just
> > > > >>>> advice: you (and others) are free to use it or ignore it :)
> > > > >>>>
> > > > >>>> Thanks
> > > > >>>> Leonid
> > > > >>>
> > > > >>>
> > > > >>> Mats Karlsson wrote:
> > > > >>>> <<I would say that if you can remove parameters/model
> > > > >>>> components without detriment to goodness-of-fit then the
> > > > >>>> model is overparameterized.>>
> > > > >>>>
> > > > >>>
> > > > >
> > >
> > > --
> > > Nick Holford, Professor Clinical Pharmacology
> > > Dept Pharmacology & Clinical Pharmacology
> > > University of Auckland, 85 Park Rd, Private Bag 92019, Auckland,
> > > New Zealand
> > > n.holford fax:+64(9)373-7090
> > > mobile: +64 21 46 23 53
> > > http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford
> > >
> > >
> >
> >
> >
>
>
Received on Tue Aug 25 2009 - 16:39:10 EDT

The NONMEM Users Network is maintained by ICON plc. Requests to subscribe to the network should be sent to: nmusers-request@iconplc.com.

Once subscribed, you may contribute to the discussion by emailing: nmusers@globomaxnm.com.