NONMEM Users Network Archive


RE: Linear VS LTBS

From: Mats Karlsson <mats.karlsson>
Date: Mon, 24 Aug 2009 04:01:43 +0200

Hi Nick,

Maybe Leonid's suggestion to agree to disagree was a good one but here we go
again :)
See below

Mats

Mats Karlsson, PhD
Professor of Pharmacometrics
Dept of Pharmaceutical Biosciences
Uppsala University
Box 591
751 24 Uppsala Sweden
phone: +46 18 4714105
fax: +46 18 471 4003


-----Original Message-----
From: owner-nmusers On Behalf Of Nick Holford
Sent: Monday, August 24, 2009 2:36 AM
To: nmusers
Subject: Re: [NMusers] Linear VS LTBS

Hi Mats,

I was wondering when you would join in this discussion :-)

Mats wrote:

> What kind of evidence did you have in mind?
I think it would be pretty hard to provide evidence for Leonid's
assertion that overparameterization is often the cause of
convergence/covariance failures.
<< I thought you asked Leonid for something that possibly couldn't be
produced without spending a massive effort.

If one could investigate a large sample of models from typical users
that have had convergence/covariance problems then it should be possible
to determine which models are overparameterized and which are not. It
would then be possible to confirm or deny the assertion that
overparameterization is "often" the cause of this kind of problem.

I think Leonid's assertion is simply speculation at this stage. It could
be true but there is no evidence for it. On the other hand I and others
have provided evidence that convergence/covariance failures are not a
sign of a poorly constructed model but are more likely due to defects in
NONMEM VI.
<<I think your assertion is speculation too. What has been shown by many of
us is that with bootstraps or simulation under the same model and same
design, convergence is not a reliable tool for detecting the quality of
parameter estimates. That is far from showing its lack of value to detect
overparameterization in other types of situations, most importantly model
building. Something I think would get closer to the usefulness of the
covariance step as a diagnostic for model building would be to do a
simulation-re-estimation study. Take a model and some data, simulate and
re-estimate N times and investigate whether the frequency of failed COV
steps changes with the amount of data (or size of the model). If it is of no
value, the fraction of failed COV steps should be independent of the amount
of data (and model size). Although I haven't done it, my guess is that there
will be a relation.
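
As a minimal NM-TRAN sketch of such a simulation-re-estimation experiment
(the seed, the number of subproblems and the estimation options below are
only illustrative, and an existing model and data set are assumed):

$SIMULATION (20090824) SUBPROBLEMS=100 TRUE=INITIAL
$ESTIMATION METHOD=1 INTERACTION MAXEVAL=9999
$COVARIANCE
; With SUBPROBLEMS>1 and an $ESTIMATION record present, NONMEM simulates a
; new data set for each subproblem and then re-estimates it, attempting the
; $COVARIANCE step each time. Counting how often the covariance step
; completes, for data sets (or models) of different sizes, gives the
; fraction of failed COV steps referred to above.

(PsN's sse tool automates essentially the same simulate-then-re-estimate
loop.)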

>
> All models are wrong and I see no reason why the exponential error
> model would be different although I think it is better than the
> proportional error for most situations. It seems that you assume that
> whenever TBS is used, only an additive error (on the transformed
> scale) is used. Is that why you say it is wrong? Or is it because you
> believe in negative concentrations?
>

All models are wrong, of course. But some are more wrong than others.

Real measurement systems always have some kind of a random additive
error ('baseline noise'). This means that a measurement of true zero
with such a system will be distributed around zero -- sometimes negative
and sometimes positive. If you talk to chemical analysts and push them
to be honest then they will admit that negative measurements are indeed
possible. Please note the difference between the true concentration
(which can be zero but not negative) and measurements of the true
concentration which can be negative.

A residual error model that is *only* exponential does not allow the
description of negative concentration measurements. This is the same as
having *only* an additive error model on the log transformed scale.

An additive model (or a proportional model which is just a scaled
additive model) on the untransformed scale can describe the residual
error associated with negative measurements.
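
To make the contrast concrete, here is a minimal NM-TRAN sketch of the two
alternatives (the EPS indexing is illustrative and not taken from any model
in this thread):

; additive + proportional error on the untransformed scale
$ERROR
 Y = F + F*EPS(1) + EPS(2)  ; Var(Y) = F**2*SIGMA(1,1) + SIGMA(2,2), which
                            ; stays at or above SIGMA(2,2) as F approaches
                            ; zero, so measurements scattered around zero,
                            ; including negative ones, can be described

; exponential-only error, i.e. additive error on the log scale (LTBS)
$ERROR
 Y = LOG(F) + EPS(1)        ; with DV entered as log(concentration); back-
                            ; transformed this is F*EXP(EPS(1)), which is
                            ; always positive and whose variance shrinks
                            ; towards zero as F approaches zero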

Optimal design based on a model with only an exponential residual error
will not give sensible designs, because the highest precision is at
concentrations approaching zero, and thus at times approaching infinity
after the dose.

> Why would you not be able to get sensible information from models that
> don't have an additive error component? (You can of course have a
> residual error magnitude that increases with decreasing concentrations
> without having to have an additive error; this regardless of whether
> you use the untransformed or transformed scale).
You can, of course, get information from models that ignore the additive
residual error. Indeed the additive residual error may well be quite
negligible for describing data. If all you are going to do is to
describe the past then the model may be adequate. But without some
additional component in the residual error it will not be possible to
find an optimal design using the methods I have seen (e.g. WinPOPT).

<< Chemists, however pushed, would never report negative concentrations, not
for past studies, not for future studies. The methods they use don't even
report them. Thus if you design a study believing that you would get
reported negative concentrations, I think you're designing a sub-optimal
study (of course all designs are sub-optimal just as all models are
wrong). If you want to have error models that predict negative
concentrations, then you are not describing the residual error process
appropriately. We have to take account of the real error-generating process,
not the one we would choose if we were running the show. Therefore you
shouldn't assume that you will get observations that you won't get. In all
assays I've seen there is a limit below which you will not get a
concentration measurement. You shouldn't assume that you will get a
measurement; rather assume that you will get an answer that the
concentration is lower than X, and design your study accordingly.
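
One way to build "you will get an answer that the concentration is lower
than X" into the analysis is the usual likelihood-based (M3-type) handling
of censored observations; a minimal NM-TRAN sketch, in which the LLOQ value,
the BQL data item and the error structure are all illustrative:

$ERROR
 LLOQ  = 0.1                ; illustrative lower limit of quantification
 IPRED = F
 W     = SQRT(THETA(3)**2 + (THETA(4)*IPRED)**2) ; additive + proportional SD
 IF (BQL.EQ.0) THEN         ; BQL: assumed 0/1 data item flagging censored records
  F_FLAG = 0
  Y = IPRED + W*EPS(1)      ; usual residual for a reported concentration
 ELSE
  F_FLAG = 1
  Y = PHI((LLOQ-IPRED)/W)   ; likelihood that the measurement is below the LLOQ
 ENDIF

$SIGMA 1 FIX                ; W carries the residual magnitude
; estimation with F_FLAG needs a Laplacian likelihood, e.g.
; $ESTIMATION METHOD=1 LAPLACIAN INTERACTION

A design evaluation can then treat values below X as censored reports rather
than as negative concentrations.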


Best wishes,

Nick



Mats Karlsson wrote:
>
> Nick,
>
> Pls see below.
>
> Best regards,
>
> Mats
>
> Mats Karlsson, PhD
> Professor of Pharmacometrics
> Dept of Pharmaceutical Biosciences
> Uppsala University
> Box 591
> 751 24 Uppsala Sweden
> phone: +46 18 4714105
> fax: +46 18 471 4003
>
> *From:* owner-nmusers [mailto:owner-nmusers] On Behalf Of Nick Holford
> *Sent:* Sunday, August 23, 2009 11:02 PM
> *To:* Leonid Gibiansky
> *Cc:* nmusers
> *Subject:* Re: [NMusers] Linear VS LTBS
>
> Leonid,
>
> This is what I wanted to bring to the attention of nmusers:
>
> "Of course, I agree that overparameterisation could be a cause of
> convergence problems but I would not agree that this is often the
> reason. "
>
> If you can provide some evidence that over-parameterization is *often*
> the cause of convergence problems then I will be happy to consider it.
>
> What kind of evidence did you have in mind?
>
>
> My experience with NM7 beta has not convinced me that the new methods
> are helpful compared to FOCE. They require much longer run times and
> currently mysterious tuning parameters to do anything useful.
>
> Truly exponential error is never the truth. This is a model that is
> wrong and IMHO not useful. You cannot get sensible optimal designs
> from models that do not have an additive error component.
>
> All models are wrong and I see no reason why the exponential error
> model would be different although I think it is better than the
> proportional error for most situations. It seems that you assume that
> whenever TBS is used, only an additive error (on the transformed
> scale) is used. Is that why you say it is wrong? Or is it because you
> believe in negative concentrations?
>
> Why would you not be able to get sensible information from models that
> don't have an additive error component? (You can of course have a
> residual error magnitude that increases with decreasing concentrations
> without having to have an additive error; this regardless of whether
> you use the untransformed or transformed scale).
>
>
> Nick
>
> Leonid Gibiansky wrote:
>
> Hi Nick,
>
> You are once again ignoring the actual evidence that NONMEM VI will
> fail to converge or not complete the covariance step for
> over-parametrized problems :)
>
> Sure, there are cases when it doesn't converge even if the model is
> reasonable, but it does not mean that we should ignore these warning
> signs of possible ill-parameterization. I think that the group is
> already tired of our once-a-year discussions on the topic, so, let's
> just agree to disagree one more time :)
>
> Nonmem VII unlike earlier versions will provide you with the standard
> errors even for non-converging problems. Also, you will always be able
> to use Bayesian or SAEM, and never worry about convergence, just stop
> it at any point and do VPC to confirm that the model is good :)
>
> Yes, indeed, I observed that FOCEI with non-transformed variables was
> always or nearly always equivalent to FOCEI in log-transformed
> variables. Still, truly exponential error cannot be described in
> original variables, so I usually try both in the first several models,
> and then decide which of them to use for model development.
>
> Thanks
> Leonid
>
> --------------------------------------
> Leonid Gibiansky, Ph.D.
> President, QuantPharm LLC
> web: www.quantpharm.com
> e-mail: LGibiansky at quantpharm.com
> tel: (301) 767 5566
>
>
>
>
> Nick Holford wrote:
>
> Leonid,
>
> You are once again ignoring the actual evidence that NONMEM VI will
> fail to converge or not complete the covariance step more or less at
> random. If you bootstrap simulated data in which the model is known
> and not overparameterised it has been shown repeatedly that NONMEM VI
> will sometimes converge and do the covariance step and sometimes fail
> to converge.
>
> Of course, I agree that overparameterisation could be a cause of
> convergence problems but I would not agree that this is often the reason.
>
> Bob Bauer has made efforts in NONMEM 7 to try to fix the random
> termination behaviour and covariance step problems by providing
> additional control over numerical tolerances. It remains to be seen by
> direct experiment if NONMEM 7 is indeed less random than NONMEM VI.
>
> BTW in this discussion about LTBS I think it is important to point out
> that the only systematic study I know of comparing LTBS with
> untransformed models was the one you reported at the 2008 PAGE meeting
> (www.page-meeting.org/?abstract=1268). My understanding of
> your results was that there was no clear advantage of LTBS if INTER
> was used with non-transformed data:
> "Models with exponential residual error presented in the
> log-transformed variables
> performed similar to the ones fitted in original variables with INTER
> option. For problems with
> residual variability exceeding 40%, use of INTER option or
> log-transformation was necessary to
> obtain unbiased estimates of inter- and intra-subject variability."
>
> Do you know of any other systematic studies comparing LTBS with no
> transformation?
>
> Nick
>
> Leonid Gibiansky wrote:
>
> Neil
> Large RSE, inability to converge, failure of the covariance step are
> often caused by the over-parametrization of the model. If you already
> have a bootstrap, look at the scatter-plot matrix of parameters versus
> parameters (THETA1 vs THETA2, ..., THETA1 vs OMEGA1, ...); these are
> very informative plots. If you have over-parametrization on the
> population level, it will be seen in these plots as strong
> correlations of the parameter estimates.
>
> Also, look at plots of ETAs vs ETAs. If you see strong correlation
> (close to 1) there, it may indicate over-parametrization on the
> individual level (too many ETAs in the model).
>
> For a random effect with a very large RSE on the variance, I would try
> to remove it and see what happens with the model: often, this (high
> RSE) is an indication that the random effect is not needed.
>
> Also, try a combined error model (on log-transformed variables):
>
> W1=SQRT(THETA(...)/IPRED**2+THETA(...)) ; additive variance (original scale)/IPRED**2 + proportional variance
> Y = LOG(IPRED) + W1*EPS(1)              ; additive residual on the log scale, SD = W1
>
> $SIGMA
> 1 FIXED                                 ; EPS(1) has unit variance; W1 carries the magnitude
>
>
> Why were concentrations at the LOQ? Was it because BQLs were inserted as
> the LOQ value? If so, this is not a good idea.
> Thanks
> Leonid
>
>
> --------------------------------------
> Leonid Gibiansky, Ph.D.
> President, QuantPharm LLC
> web: www.quantpharm.com
> e-mail: LGibiansky at quantpharm.com
> tel: (301) 767 5566
>
>
>
>
> Indranil Bhattacharya wrote:
>
> Hi Joachim, thanks for your suggestions/comments.
>
> When using LTBS I had used a different error model and the error block
> is shown below
> $ERROR
> IPRED = -5                 ; guard value used when F is not positive
> IF (F.GT.0) IPRED = LOG(F) ; log transforming prediction
> IRES=DV-IPRED
> W=1
> IWRES=IRES/W ;Uniform Weighting
> Y = IPRED + ERR(1)
>
> I also performed a bootstrap on both the LTBS and non-LTBS models; the
> non-LTBS CIs were much tighter and the precision was greater than with
> LTBS.
> I think the problem is plausibly that when fitting the
> non-transformed data I used the proportional + additive model,
> while with LTBS the exponential model (which converts to an additive
> model under LTBS) was used. The extra additive component may also be
> more important in the non-LTBS model, as for some subjects the
> concentrations were right at the LOQ.
>
> I tried the dual error model for LTBS but it does not provide a CV%. So I
> am currently running a bootstrap to get the CI when using the dual
> error model with LTBS.
>
> Neil
>
> On Fri, Aug 21, 2009 at 3:01 AM, Grevel, Joachim <Joachim.Grevel> wrote:
>
> Hi Neil,
> 1. When data are log-transformed the $ERROR block has to change:
> additive error becomes true exponential error which cannot be
> achieved without log-transformation (Nick, correct me if I am wrong).
> 2. Error cannot "go away". You claim your structural model (THs)
> remained unchanged. Therefore the "amount" of error will remain the
> same as well. If you reduce BSV you may have to "pay" for it with
> increased residual variability.
> 3. Confidence intervals of ETAs based on standard errors produced
> during the covariance step are unreliable (many threads in NMusers).
> Do a bootstrap to obtain more reliable C.I.s.
> These are my five cents worth of thought in the early morning,
> Good luck,
> Joachim
>
>
> -----Original Message-----
>
>
> *From:* owner-nmusers [mailto:owner-nmusers] On Behalf Of Indranil
> Bhattacharya
> *Sent:* 20 August 2009 17:07
> *To:* nmusers
> *Subject:* [NMusers] Linear VS LTBS
>
> Hi, while fitting data using NONMEM to a regular PK data set
> and its log-transformed version I made the following observations:
> - PK parameters (thetas) were generally similar between the
> regular fit and the LTBS fit.
> - ETA on CL was similar.
> - ETA on Vc was different between the two runs.
> - Sigma was higher with LTBS (51%) than linear (33%).
> Now using LTBS, I would have expected to see the ETAs unchanged
> or actually decrease, and accordingly I observed that the ETA
> values decreased, showing less BSV. However, the %RSE for ETA on
> Vc changed from 40% (linear) to 350% (LTBS) and, further, the
> lower bound of the 95% CI for ETA on Vc is negative (-0.087).
> What would be the explanation behind the above observations
> regarding increased %RSE using LTBS and a negative lower bound
> for ETA on Vc? Can a negative lower bound in ETA be considered
> as zero?
> Also, why would the residual variability increase when using LTBS?
> Please note that the PK is multiexponential (maybe this is
> responsible).
> Thanks.
> Neil
>
> -- Indranil Bhattacharya
>
>
>
>
> --
> Indranil Bhattacharya
>
>
>
> --
> Nick Holford, Professor Clinical Pharmacology
> Dept Pharmacology & Clinical Pharmacology
> University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
> n.holford tel:+64(9)923-6730 fax:+64(9)373-7090
> mobile: +64 21 46 23 53
> http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford

--
Nick Holford, Professor Clinical Pharmacology
Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
n.holford
mobile: +64 21 46 23 53
http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford
Received on Sun Aug 23 2009 - 22:01:43 EDT
