NONMEM Users Network Archive

Hosted by Cognigen

RE: algorithm limits

From: James G Wright <james>
Date: Tue, 22 Jul 2008 18:49:36 +0100

Hi Mark,


This is a good question. I am not aware of any public-domain simulation
work in extreme-variability scenarios, so my comments are based on
theory.


The fundamental problem with the standard NONMEM algorithm, in which the
fixed effects and random effects are estimated simultaneously by joint
maximum likelihood, is that the size of the variance parameters can bias
the mean, sometimes substantially (which is why generalized least
squares remains the standard algorithm in the statistical community). If
the variance model is even slightly misspecified (and it nearly always
is), this can be very damaging to your population mean estimate. Often
this leads to overestimates of the mean (so that the variance can be
smaller), but in some circumstances you can get an excessively high CV%
because the mean is underestimated. The other common cause is parameter
values close to zero in a subset of subjects, which on a log scale is
minus infinity. Given that you are getting such a high CV%, the
lognormal may not be the best approach. Switching to additive
intersubject variability would remove this dependence between mean and
variance, and I would definitely give it a try as an exploratory step.
In WinBUGS or a nonparametric package you could explore other
distributions; in NONMEM, your only options are subsetting the data
manually or using a mixture model, each of which brings new problems.

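As a rough illustration of that mean-variance coupling, here is a small
Python sketch (not NONMEM code; the typical value and OMEGA are invented
for the example), putting the same ETA on a parameter exponentially and
additively:

import numpy as np

# Invented values, for illustration only
rng = np.random.default_rng(1)
tvcl = 10.0                   # typical value of the parameter
omega2 = 0.5                  # OMEGA, i.e. the variance of ETA
eta = rng.normal(0.0, np.sqrt(omega2), size=200_000)

cl_exp = tvcl * np.exp(eta)   # exponential (lognormal) intersubject variability
cl_add = tvcl + eta           # additive intersubject variability

print("lognormal: mean %.2f (typical value x exp(OMEGA/2) = %.2f), CV %.0f%%"
      % (cl_exp.mean(), tvcl * np.exp(omega2 / 2),
         100 * cl_exp.std() / cl_exp.mean()))
print("additive:  mean %.2f (equal to the typical value), CV %.0f%%"
      % (cl_add.mean(), 100 * cl_add.std() / cl_add.mean()))

Under the exponential model the population mean is inflated by
exp(OMEGA/2), so any error in OMEGA feeds straight into the mean; under
the additive model the mean no longer depends on OMEGA, at the price of
allowing negative individual values.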

Linearization is a slightly different issue, as it affects how the
random effects enter the fit. FOCE linearization will probably give
you good individual fits if your individual data contain information
about all parameters (i.e., you could almost get away with a two-stage
approach), but that is not the same as having reliable population
parameter estimates. From your description of the model it sounds like
you have variability parallel to the time axis, and this is the toughest
kind to linearize; if the problem lies in a parameter that shifts the
predicted curve horizontally in time (as a lag-time does), that pushes
you away from classic NONMEM as a software choice.

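To see why a horizontal shift is hard to linearize, here is a small
numeric sketch (Python; a hypothetical one-compartment oral curve with a
lag-time, rate constants invented for the example) comparing the curve
shifted by an ETA on the lag with a first-order expansion in ETA around
zero, which is the kind of approximation FO-type methods rely on:

import numpy as np

ka, ke, lag0 = 1.5, 0.1, 1.0       # invented rate constants and typical lag

def conc(t, lag):
    # unit-dose one-compartment oral curve, shifted right by the lag-time
    tt = np.maximum(t - lag, 0.0)
    return np.exp(-ke * tt) - np.exp(-ka * tt)

t = np.array([0.0, 1.5, 2.0, 4.0, 8.0, 12.0])
eta = 0.5                          # individual lag = lag0 * exp(eta)

shifted = conc(t, lag0 * np.exp(eta))

# first-order expansion in eta around 0, gradient by finite difference
h = 1e-4
dfdeta = (conc(t, lag0 * np.exp(h)) - conc(t, lag0)) / h
linearized = conc(t, lag0) + eta * dfdeta

for ti, tr, ap in zip(t, shifted, linearized):
    print(f"t={ti:5.1f}   shifted curve={tr:7.4f}   linearized={ap:7.4f}")

Near the absorption front the expansion predicts concentration where the
shifted curve is still zero; only well past the lag do the two agree.
The kink at t = lag (the max() in the code) is exactly what makes this
kind of parameter awkward for gradient-based linearization.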

As a rule of thumb, I would definitely be cynical about a CV over 300%,
and would be extremely cautious about using such a model for prediction.
My eyebrows start to rise at around 130%. If you decide to simulate,
good luck, and I would love to know your findings. Best regards,


James G Wright PhD

Scientist

Wright Dose Ltd

Tel: 44 (0) 772 5636914


-----Original Message-----
From: owner-nmusers
On Behalf Of Mark Sale - Next Level Solutions
Sent: 19 July 2008 21:13
Cc: nmusers
Subject: [NMusers] algorithm limits


General question:
  What are the practical limits on the magnitude of OMEGA that are
compatible with the FO and FOCE/I methods? I seem to recall Stuart at
one time suggesting that a CV of 0.5 (exponential OMEGA of 0.5) was
about the limit at which the Taylor expansion can be considered a
reasonable approximation of the real distribution. What about FOCE-I?
  I'm asking because I have a model with an exponential OMEGA of 13 (and
sometimes 100) under FOCE-I, and it seems to be very poorly behaved in
spite of overall reasonable-looking data (i.e., the structural model
traces a line that looks like the data, but some people are WAY above
the line and some are WAY below, and some rise MUCH faster and some
rise MUCH later; by WAY I mean >10,000-fold, but the residual error
looks not too bad). Looking at the raw data, I believe that the
variability is at least this large. Can I believe that NONMEM FOCE
(FO?) will behave reasonably?
thanks
Mark

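For orientation on the magnitudes in this thread: with the usual
exponential-ETA parameterisation, OMEGA is the variance of ETA and the
CV of the implied lognormal is sqrt(exp(OMEGA) - 1); sqrt(OMEGA) is the
familiar small-OMEGA shorthand. A short Python sketch of the conversion:

import math

def lognormal_cv(omega2):
    # exact CV of a lognormal whose log has variance omega2
    return math.sqrt(math.exp(omega2) - 1.0)

for omega2 in (0.1, 0.5, 1.0, 2.3, 13.0):
    print(f"OMEGA = {omega2:5.1f}  ->  CV = {100 * lognormal_cv(omega2):>9,.0f}%"
          f"   (sqrt shorthand {100 * math.sqrt(omega2):.0f}%)")

# the OMEGA implied by a target CV
for cv in (1.3, 3.0):
    print(f"CV = {100 * cv:.0f}%  ->  OMEGA = {math.log(cv ** 2 + 1.0):.2f}")

By this reckoning a CV of about 130% already corresponds to an OMEGA
near 1, a CV of 300% to an OMEGA near 2.3, and an OMEGA of 13 implies a
CV in the tens of thousands of per cent, far beyond the 130-300% range
flagged above.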

Received on Tue Jul 22 2008 - 13:49:36 EDT
