NONMEM Users Network Archive

Hosted by Cognigen


From: Matt Hutmacher <matt.hutmacher>
Date: Tue, 15 Dec 2009 14:43:34 -0500

Hi Pavel,


With VAS data, or any data that come from a bounded range, the distribution
can appear J- or even U-shaped, because some of the responses accumulate on
the boundaries. This can be problematic for estimation using the normal
distribution (the way we typically model in NONMEM). There are various ways
to approach this problem. I have listed some in order of complexity of
implementation. In these, let Z = VAS/10, so that Z is 0, 1, or any
continuous value in (0,1).


First, for Z = 0 let Z* = 0+A, and for Z = 1 let Z* = 1-A, where A is a very
small positive value. For Z not equal to 0 or 1, Z* = Z. Then let the new
response DV = LOGIT(Z*). This DV ranges between -inf and inf, which might
facilitate a model like Y = F + EPS(1) in NONMEM - additive error might be
OK. Also, A should be small enough that Z* for Z = 0 is less than Z* for all
Z > 0. The problem with this method is how to choose A. Its choice is
subjective and could adversely influence the estimates. A sensitivity
analysis with respect to the choice of A could be used to mitigate this
issue. I have just learned that this approach has been previously published
- see Molas M and Lesaffre E, A comparison of three random effects
approaches to analyze repeated bounded outcome scores with an application in
a stroke revalidation study, Stat Med. 2008; 27:6612-6633.
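As a rough numeric sketch of this first approach (plain Python, not NONMEM;
the value A = 0.001 is just an assumed cutoff for illustration):

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def adjust(z, a=1e-3):
    """Move boundary values slightly inside (0,1) before taking the logit."""
    if z <= 0.0:
        return a
    if z >= 1.0:
        return 1.0 - a
    return z

vas = [0, 1, 3, 7, 10]                     # raw VAS scores on the 0-10 scale
dv = [logit(adjust(v / 10)) for v in vas]  # transformed DV, now in (-inf, inf)
```

Shrinking A pushes the boundary points further into the tails of the
transformed scale, which is exactly why the estimates can be sensitive to
its choice.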


Second, you could take Z* = (Z + A)/(1 + A + B), where A and B are positive
values. Then you can take LOGIT(Z*) as above. This is a bit different from
the first method because it ensures that Z* for Z = 0 is less than Z* for Z
> 0 - essentially all the data get translated, not just the boundary. This
suffers from the subjectivity of the choice of A and B as well. This method
is based on the Box-Cox concept of transformation with shift parameters.
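Reading the shift as Z* = (Z + A)/(1 + A + B) (my assumption about the
intended formula; the values of A and B below are arbitrary small numbers),
the idea can be sketched as:

```python
import math

A, B = 0.001, 0.001        # assumed shift parameters; subjective, as noted above

def shift(z, a=A, b=B):
    """Linear shift/rescale that moves all of [0,1] strictly inside (0,1)."""
    return (z + a) / (1 + a + b)

# whole dataset is translated, not just the boundary values
dv = [math.log(s / (1 - s)) for s in (shift(v / 10) for v in [0, 2, 5, 10])]
```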


Third, and perhaps most principled, one could let DV=LOGIT(Z) for all Z not
equal to 0 or 1. Then there are some options for how to handle Z = 0 or 1.
Basically, these options are as suggested by Beal SL, Ways to fit a PK model
with some data below the quantification limit, J Pharmacokinet Pharmacodyn.
2001 Oct;28(5):481-504. You can omit and ignore them, truncate the
distribution to exclude them, or include them as censored observations. If
you treat them as censored, then Z=0 can be handled as an observation that
is BQL, where the quantification limit is the smallest Z not equal to 0.
Z=1 can be handled symmetrically, with the '1' treated as above an upper
quantification limit. You will need to construct the likelihood for the
censoring. The issue here is implementation. Omitting them is easiest, but
if your drug has benefit and is, say, increasing the number of 0's in the
dataset, then you probably do not want to throw them out. Including them as
censored observations in NONMEM is a bit tricky, because there is no real
cumulative normal distribution function which handles etas (to my
knowledge). An approximation (Abramowitz and Stegun) is available, but very
messy. A simpler way to code the problem is to use the logistic distribution
for the censored observations. The logistic can be compared to a
t-distribution with 9 degrees of freedom. Note this is an assumption, and a
sensitivity analysis for it would be a good thing to do, but difficult to
implement in this case. This approach requires the use of the LAPLACE LIKE
or -2LL feature.
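A sketch of that censored-likelihood idea in plain Python (not NONMEM; the
logit is used as the transformation, c_lo/c_hi stand for the transformed
smallest and largest interior Z values, and all names are mine):

```python
import math

def log_lik(vas, mu, w, c_lo, c_hi):
    """Per-observation log-likelihood: boundary scores are censored,
    interior scores get a normal density on the logit scale (with Jacobian)."""
    z = vas / 10.0
    if z <= 0.0:                            # left-censored: logistic CDF at c_lo
        x = (c_lo - mu) / w
        return -math.log(1.0 + math.exp(-x))
    if z >= 1.0:                            # right-censored: logistic survival at c_hi
        x = (c_hi - mu) / w
        return -x - math.log(1.0 + math.exp(-x))
    h = math.log(z / (1.0 - z))             # logit of the interior observation
    jac = 1.0 / (10.0 * z * (1.0 - z))      # |dH/dVAS| for DV on the 0-10 scale
    return (-0.5 * math.log(2.0 * math.pi * w * w)
            - 0.5 * ((h - mu) / w) ** 2
            + math.log(jac))
```

Using the logistic CDF for the censored pieces is the simplification
described above; swapping in the normal CDF would need an approximation such
as the one in Abramowitz and Stegun.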


Also, one could even hypothesize using a different transformation than the
logit in the first, second, and third scenarios. For example, the
complementary log-log (DV = LOG(-LOG(1-Z))) could be used (note that Z* is
used for the first and second methods above). A transformation could even be
estimated from the data, derived from a family that includes the logit and
complementary log-log as special cases. This transformation is
DV = LOG[{(1-Z*)^(-C)-1}/C], where C=1 yields the logit and C -> 0 yields
the complementary log-log. This transformation provides even greater
flexibility for handling difficult distributions and hopefully will promote
normality, at least at the individual level. I would not suggest the
"transform both sides" method here, as there is no a priori,
pharmacologically based model that can be posited for modeling the data on
the VAS scale. Also, the transformation might allow the random effects to
enter the model in a linear fashion, which has nice properties.
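The C-indexed family can be checked numerically (plain Python; Z here plays
the role of the Z* from the first two methods where applicable):

```python
import math

def g(z, c):
    """Transformation family: c=1 gives the logit, c -> 0 the complementary log-log."""
    if c == 0.0:
        return math.log(-math.log(1.0 - z))     # limiting case as c -> 0
    return math.log(((1.0 - z) ** (-c) - 1.0) / c)

z = 0.3
logit_z = math.log(z / (1.0 - z))               # logit for comparison
cloglog_z = math.log(-math.log(1.0 - z))        # complementary log-log
```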


The second and third options with the transformation are the focus of
Estimating Transformations for Population Models of Continuous, Closed
Interval Data, by Matthew M. Hutmacher and Jonathan L. French (thanks,
Leonid). I need to upload the presentation - sorry, the abstract does not
provide much detail.

The first two methods provide residuals for all the data; the third will not
provide residuals for the 0's and 1's. So be careful when evaluating
residuals from any of these approaches, because the residuals are somewhat
artificial in the first and second cases. VPCs and PPCs will help in the
evaluation of the models.


Here is a snippet of NONMEM code showing how to implement the third method
and estimate the transformation. Hope this helps.

Best and please all have a happy holiday season,



Code for a simple model (note that it is written for illustration and has
not been tested).




$PK
MU = THETA(1)+ETA(1)+(THETA(2)+ETA(2))*TIME+(THETA(3)+ETA(3))*DOSE ;***MU
is the model prediction and is on the transformed scale (-inf,inf)***;

C = THETA(4) ;***Transformation parameter***;

W = SQRT(THETA(5)**2) ;***Standard deviation of residuals***;


$ERROR

Z = VAS/10 ;***VAS is the DV on the dataset***;

X = 1-Z
IF(X.LE.0) X = 0.5 ;***dummy value at Z=1; interior quantities are not used
at the boundaries***;

U = (X**(-C)-1)/C
IF(U.LE.0) U = 1 ;***dummy value at Z=0***;
H = LOG(U) ;***H is the VAS data on the transformed scale***;

JX = ((1/U)*(X**(-C-1))*(1/10)) ;***derivative of H with respect to VAS***;

J = JX ;***Absolute value of the Jacobian (all factors are positive), which
is used to calculate the objective function value on the VAS scale***;


IWRES=(H-MU)/W ;***individual weighted residual on the transformed scale***;

HMN= LOG(((1-MNVS/10)**(-C)-1)/C) ;***MNVS is the smallest VAS.NE.0: HMN is
it transformed***;

HMX= LOG(((1-MXVS/10)**(-C)-1)/C) ;***MXVS is the largest VAS.NE.10: HMX is
it transformed***;

XRS1=(HMN-MU)/W ;***standardized left-censoring point for VAS=0***;

XRS2=(HMX-MU)/W ;***standardized right-censoring point for VAS=10***;
LGL1=-LOG(1+EXP(-XRS1)) ; ***log-likelihood for VAS=0; LOG(1/(1+EXP(-XRS1)))
using the logistic distribution***;



LGL2=-XRS2-LOG(1+EXP(-XRS2)) ; ***log-likelihood for VAS=1;
LOG(1-1/(1+EXP(-XRS2))) using the logistic distribution ***;


LGL3 = -0.5*LOG(W*W)-0.5*(IWRES**2)+LOG(J) ;***Log likelihood for VAS ne 0
or 10***;

LGL = LGL3
IF(VAS.LE.0) LGL = LGL1
IF(VAS.GE.10) LGL = LGL2

Y = -2*LGL ;***-2 log-likelihood for each record; use with $EST METHOD=1
LAPLACE -2LL***;















From: owner-nmusers
Behalf Of nonmem
Sent: Tuesday, December 15, 2009 6:41 AM
To: Leonid Gibiansky
Cc: nmusers
Subject: Re: [NMusers] $ERROR and LOGIT




This is about a visual analog scale. There are a lot of 0 and 1 values
(actually, VAS changes from 0 to 10 in this case, but it can be divided by
10). There are articles, presentations, and dissertations which use the
logit, so I am trying different transformations, including the logit.


The CV error model works OK, but I am still trying to take care of the
skewed distribution.


When I use the exponential error model, NONMEM transforms it into the CV
error model. Simulations then do not make sense, because NONMEM does not do
the same there - exactly as described in the NONMEM 6 manual.


Thank you for the article. I'll keep digging.





----- Original Message -----
From: Leonid Gibiansky
Date: Tuesday, December 15, 2009 1:01 am
Subject: Re: [NMusers] $ERROR and LOGIT
To: nonmem
Cc: nmusers

> Pavel,
> I am not sure what is the problem with the log-transformation of the
> data. log(x) = infinity only if x = infinity; do you have infinite
> observations in your data set? If not, then the transformed data
> cannot be equal to infinity.
> log(x) = -infinity only if x = 0.
> Do you have BQL observations coded as zeros? If so, you cannot use
> the exponential error model. But you can either exclude BQLs (and
> use the log-transformation) or treat them as BQLs (and still use the
> log-transformation).
> Looks like your prediction F is between 0 and 1. I do not think that
> exponential error is appropriate for this type of data. Could you
> elaborate what exactly you are modeling? If this is indeed interval
> data, this poster can be relevant (Estimating Transformations for
> Population Models of Continuous, Closed Interval Data, Matthew M.
> Hutmacher and Jonathan L. French):
> Thanks
> Leonid
> --------------------------------------
> Leonid Gibiansky, Ph.D.
> President, QuantPharm LLC
> web:
> e-mail: LGibiansky at
> tel: (301) 767 5566
> nonmem
> >
> > Hello,
> >
> > NONMEM has the following property related to intra-subject
> > variability:
> >
> > "During estimation by the first-order method, the exponential model
> > and proportional models give identical results, i.e., NONMEM cannot
> > distinguish between them." So, NONMEM transforms F*DEXP(ERR(1)) into
> > F + F*ERR(1).
> >
> > Is there an easy way around it? I am trying to code the logit
> > transformation. I cannot log-transform the original data as is
> > suggested in some publications, including the presentation by Plan
> > and Karlsson (Uppsala), because many values will be equal to plus or
> > minus infinity. Will NONMEM "linearize" the following code:
> >
> > Z = DLOG((F+THETA(10))/(1-F+THETA(10)))
> > Y = DEXP(Z + ERR(1))/(1 + DEXP(Z + ERR(1)))
> >
> > Thanks!
> >
> > Pavel
> >

Received on Tue Dec 15 2009 - 14:43:34 EST

The NONMEM Users Network is maintained by ICON plc.