NONMEM Users Network Archive

Re: algorithm limits

From: Leonid Gibiansky <LGibiansky>
Date: Sat, 19 Jul 2008 21:37:24 -0400

Mark,
The description you gave confirms that the population model has limited
value unless the four parameters (baseline, percent change, time to drop,
and time to recovery) correlate somehow. If not, your data tell you that
the biomarker may start from very small or very large values, decrease
to zero or not decrease at all, and recover in a week or in a year.
Moreover, as I understand it, there is no central tendency there: any
baseline, drop, time to decrease, and time to recovery are independent
and equally probable (otherwise, you would have reasonable OMEGAs with a
bell-shaped rather than flat distribution of random effects). Sparse
sampling will not work in this case, and if you have dense sampling, you
may just use a two-stage approach to describe the observed (uniform?)
distribution of the individual parameters (and their correlations, if
there are any).
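
For what it is worth, here is a minimal R sketch of such a two-stage
approach (purely illustrative; it assumes a long-format data frame 'dat'
with columns ID, TIME, and DV, and uses a simple dip-and-recovery curve
as a stand-in for the real structural model):

dip <- function(t, BASE, FRAC, TNAD, WID) {
  ## BASE = baseline, FRAC = fractional drop at the nadir,
  ## TNAD = time of the nadir, WID = rough time scale of drop/recovery
  BASE * (1 - FRAC * exp(-0.5 * ((t - TNAD) / WID)^2))
}

## Stage 1: fit each subject separately (no population model, no priors)
fit_one <- function(d) {
  start <- list(BASE = max(d$DV),
                FRAC = 0.5,
                TNAD = d$TIME[which.min(d$DV)],
                WID  = diff(range(d$TIME)) / 4)
  fit <- try(nls(DV ~ dip(TIME, BASE, FRAC, TNAD, WID),
                 data = d, start = start), silent = TRUE)
  if (inherits(fit, "try-error"))
    return(c(BASE = NA, FRAC = NA, TNAD = NA, WID = NA))
  coef(fit)
}
est <- t(sapply(split(dat, dat$ID), fit_one))

## Stage 2: describe the empirical distribution of the individual estimates
summary(est)                              # any central tendency at all?
hist(log(est[, "BASE"]))                  # bell-shaped or flat?
cor(est, use = "pairwise.complete.obs")   # correlations, if there are any

Stage 1 fits every subject on its own data; stage 2 simply summarizes the
empirical distribution and correlations of the individual estimates.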

Leonid

--------------------------------------
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web: www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel: (301) 767 5566




Mark Sale - Next Level Solutions wrote:
>
> Leonid,
> This isn't PK; the model shows basically the right shape, and the data
> suggest reasonable residual error (the biological marker falls from a
> value between 5 and 310,000 to somewhere between 0 and no change from
> baseline, over the course of a couple of hours to a couple of weeks,
> then recovers somewhere between 100 hours and 9,000 hours later).
> I.e., it starts at a highly variable level, falls by some highly
> variable fraction over some variable length of time, and recovers
> somewhere between about a week and about a year later.
> But within those limits it appears pretty well behaved.
>
>
> Mark Sale MD
> Next Level Solutions, LLC
> www.NextLevelSolns.com
> 919-846-9185
>
> -------- Original Message --------
> Subject: Re: [NMusers] algorithm limits
> From: Leonid Gibiansky <LGibiansky
> Date: Sat, July 19, 2008 5:36 pm
> To: Mark Sale - Next Level Solutions <mark
> Cc: nmusers
>
> Hi Mark,
>
> If you really have 10,000-fold differences in, say, volume or
> bioavailability, a population model does not make any sense: the
> individual parameters have uninformative priors and are defined by the
> individual data only, so no meaningful predictions can be made for the
> next patient. If you only need a description of the data, you can check
> directly whether the method gives you the correct line, but you cannot
> count on predictions: they can be anywhere.
>
> As for the estimation procedure, my understanding is that large OMEGAs
> discount the population model's influence on the individual fit, and in
> this respect the method will give you the correct answer (individual
> parameters controlled by the individual data only). This is how you
> trick NONMEM into an individual fit: assign huge OMEGAs. Whether your
> true OMEGA value is 50 or 150 is more or less irrelevant: both values
> are huge and do not provide informative priors for the individual
> parameters.
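>
> As a rough illustration of how huge these values are and of the
> "uninformative prior" point (a simple R sketch, not NONMEM itself; the
> normal-normal weight below, with a hypothetical variance v of an
> individual-data-only estimate of ETA, only stands in for the actual
> conditional estimation step):
>
> ## CV implied by an exponential (log-normal) random effect,
> ## P_i = TVP * exp(ETA_i), ETA_i ~ N(0, OMEGA): CV = sqrt(exp(OMEGA) - 1)
> cv <- function(omega) sqrt(exp(omega) - 1)
> cv(c(0.5, 13, 50, 150))   # ~0.8, ~665, then astronomically large
>
> ## Weight that a normal-normal empirical Bayes estimate puts on the
> ## individual data: EBE = eta_hat * OMEGA / (OMEGA + v) -> 1 as OMEGA grows
> w <- function(omega, v = 0.1) omega / (omega + v)
> w(c(0.09, 0.5, 13, 100))  # ~0.47, ~0.83, ~0.99, ~1.0
>
> With an OMEGA of 13 (let alone 50, 100, or 150) the prior contributes
> essentially nothing, so the conditional estimates are driven by the
> individual data alone, which is the same as an individual fit.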
>
> Sometimes you get huge OMEGAs when there is a strong correlation
> between parameters, so that a combination of the ETAs is finite while
> each of them individually can be anywhere. Removing some of the random
> effects can help in this case. Sometimes large OMEGAs indicate
> multimodal distributions (or strong categorical covariate effects):
> this will show up in histograms of the ETA distributions or in
> ETA-vs-covariate plots.
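>
> For example (an R sketch, assuming the ETAs and a categorical covariate
> such as SEX have been tabled out per subject into a data frame 'tab';
> all of these names are placeholders):
>
> hist(tab$ETA1, breaks = 30)       # flat or bimodal rather than bell-shaped?
> boxplot(ETA1 ~ SEX, data = tab)   # strong categorical covariate effect?
> plot(tab$ETA1, tab$ETA2)          # strongly correlated ETAs?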
>
> Overall, I think the problems are with the model or the data rather
> than with the estimation method.
>
> Thanks
> Leonid
>
> --------------------------------------
> Leonid Gibiansky, Ph.D.
> President, QuantPharm LLC
> web: www.quantpharm.com
> e-mail: LGibiansky at quantpharm.com
> tel: (301) 767 5566
>
>
>
>
> Mark Sale - Next Level Solutions wrote:
> >
> > General question:
> > What are the practical limits on the magnitude of OMEGA that is
> > compatible with the FO and FOCE/I methods? I seem to recall Stuart at
> > one time suggesting that a CV of 0.5 (exponential OMEGA of 0.5) was
> > about the limit at which the Taylor expansion can be considered a
> > reasonable approximation of the real distribution. What about FOCE-I?
> > I'm asking because I have a model that has an exponential OMEGA of 13
> > (and sometimes 100) with FOCE-I, and it seems to be very poorly
> > behaved in spite of overall reasonable-looking data (i.e., the
> > structural model traces a line that looks like the data, but some
> > people are WAY above the line and some are WAY below, and some rise
> > MUCH faster and some rise MUCH later; by WAY I mean >10,000-fold, but
> > the residual error looks not too bad). Looking at the raw data, I
> > believe that the variability is at least this large. Can I believe
> > that NONMEM FOCE (FO?) will behave reasonably?
> > thanks
> > Mark
> >
>
Received on Sat Jul 19 2008 - 21:37:24 EDT
