NONMEM Users Network Archive

RE: algorithm limits

From: Mark Sale - Next Level Solutions <mark>
Date: Sat, 19 Jul 2008 18:52:00 -0700

Thanks Leonid,
  I believe what you tell me, and I understand that FOCE doesn't solve the problem with the approximation that FO makes, only reduces it (and possibly expands the range over which the approximation is useful?). Anyone out there with insight into what a practical limit is for FOCE, and/or whether there are any diagnostics that are helpful when you're close to it? Is it really 0.5 for FO?
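
A back-of-the-envelope check of that 0.5 rule of thumb (a sketch, not from the thread): for a parameter P = THETA*exp(ETA) with ETA ~ N(0, omega^2), the exact CV is sqrt(exp(omega^2) - 1), while FO linearizes exp(ETA) to 1 + ETA and therefore drops the mean factor E[exp(ETA)] = exp(omega^2/2). A minimal Python illustration of how fast the dropped factor grows:

    import math

    def lognormal_cv(omega2):
        """Exact CV of exp(ETA) for ETA ~ N(0, omega2)."""
        return math.sqrt(math.exp(omega2) - 1.0)

    def dropped_mean_factor(omega2):
        """E[exp(ETA)] = exp(omega2/2); the first-order expansion
        1 + ETA has mean 1, so FO neglects this factor."""
        return math.exp(omega2 / 2.0)

    for omega2 in (0.25, 0.5, 13.0):
        print(f"omega^2 = {omega2:5.2f}: CV = {lognormal_cv(omega2):8.3f}, "
              f"dropped mean factor = {dropped_mean_factor(omega2):8.3g}")

At omega^2 = 0.25 the neglected factor is about 1.13, at 0.5 about 1.28, and at 13 about 665, so an OMEGA of 13 is far outside any range where the linearization is meaningful.
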
Mark


Mark Sale MD
Next Level Solutions, LLC
www.NextLevelSolns.com
919-846-9185

-------- Original Message --------
Subject: Re: [NMusers] algorithm limits
From: Leonid Gibiansky <LGibiansky>
Date: Sat, July 19, 2008 9:37 pm
To: Mark Sale - Next Level Solutions <mark>
Cc: nmusers

web: www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel: (301) 767 5566




Mark Sale - Next Level Solutions wrote:
>
> Leonid,
> This isn't PK, and the model shows basically the right shape, and the
> data suggest reasonable residual error (the biological marker falls from
> a value between 5 and 310000, to somewhere between 0 and no change from
> baseline, over a course of a couple of hours to a couple of weeks, then
> recovers somewhere between 100 hours and 9000 hours later).
> I.e., it starts at a highly variable level, falls by some highly variable
> fraction over some variable length of time, and recovers somewhere
> between about a week and about a year.
> But, within those limits, it appears pretty well behaved.
>
>
> Mark Sale MD
> Next Level Solutions, LLC
> www.NextLevelSolns.com <http://www.NextLevelSolns.com>
> 919-846-9185
>
> -------- Original Message --------
> Subject: Re: [NMusers] algorithm limits
> From: Leonid Gibiansky <LGibiansky at quantpharm.com>
> Date: Sat, July 19, 2008 5:36 pm
> To: Mark Sale - Next Level Solutions <mark at nextlevelsolns.com>
> Cc: nmusers
>
> Hi Mark,
>
> If you really have 10,000-fold differences in, say, volume or
> bioavailability, a population model does not make any sense: individual
> parameters have uninformative priors; they are defined by the individual
> data only, and no meaningful predictions can be made for the next patient.
> So, if you need a data description, you can directly see whether the
> method provides you with the correct line, but you cannot count on
> predictions: they can be anywhere.
>
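
A quick way to quantify "they can be anywhere" (my numbers, using the OMEGA of 13 from the question quoted below): for ETA ~ N(0, 13), the central 90% range of exp(ETA) already spans about five orders of magnitude.

    import math

    omega2 = 13.0
    z90 = 1.645                              # two-sided 90% normal quantile
    half_width = z90 * math.sqrt(omega2)
    low, high = math.exp(-half_width), math.exp(half_width)
    print(f"90% range of exp(ETA): {low:.3g} to {high:.3g}")  # ~0.00266 to ~376
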
> For the estimation procedure, my understanding is that large OMEGAs
> will discount the population model's influence on the individual fit,
> and in this respect the method will give you the correct answer
> (individual parameters controlled by the individual data only). This is
> how you trick NONMEM into an individual model fit: assign huge OMEGAs.
> Whether your true OMEGA value is 50 or 150 is more or less irrelevant:
> both values are huge and do not provide informative priors for the
> individual parameters.
>
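
A minimal sketch of the huge-OMEGA point, under the simplest assumptions I can make it with (ETA observed through n measurements y_i = ETA + eps_i, eps ~ N(0, sigma^2); the names are illustrative, not NONMEM internals). The empirical Bayes estimate is the individual estimate scaled by a weight that tends to 1 as OMEGA grows, i.e., the prior stops mattering:

    def ebe_weight(omega2, sigma2, n):
        """Weight the posterior mode puts on the individual data:
        eta_hat = w * eta_individual, w = omega2 / (omega2 + sigma2/n)."""
        return omega2 / (omega2 + sigma2 / n)

    sigma2, n = 1.0, 4
    for omega2 in (0.5, 13.0, 100.0):
        print(f"omega^2 = {omega2:6.1f}: weight on individual data = "
              f"{ebe_weight(omega2, sigma2, n):.4f}")

With sigma^2 = 1 and n = 4 the weights are about 0.67, 0.98, and 0.998: whether OMEGA is 13 or 100, the individual data dominate either way, which is the point above.
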
> Sometimes you get huge OMEGAs if there is a strong correlation between
> parameters, so that a combination of ETAs is finite while each of them
> individually can be anywhere. Removal of some random effects can help in
> this case. Sometimes large OMEGAs are indicative of multivariate
> distributions (or strong categorical covariate effects): this will be
> seen on ETA distribution histograms or ETA vs. covariate plots.
>
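
A small simulation of the correlation point, with made-up numbers: two ETAs that are each huge marginally, while the combination that actually enters the model (here their difference) is tightly constrained:

    import numpy as np

    rng = np.random.default_rng(0)
    omega2, corr = 100.0, 0.999             # huge variances, nearly collinear
    cov = omega2 * np.array([[1.0, corr],
                             [corr, 1.0]])
    etas = rng.multivariate_normal([0.0, 0.0], cov, size=10_000)

    print("SD of ETA1:       ", etas[:, 0].std())                 # ~10, "anywhere"
    print("SD of ETA1 - ETA2:", (etas[:, 0] - etas[:, 1]).std())  # ~0.45, finite

Dropping one of the two random effects, or reparameterizing so that the constrained combination gets its own ETA, removes the unidentifiable direction.
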
> Overall, I think you have problems with the model or the data rather
> than with a failure of the estimation method.
>
> Thanks
> Leonid
>
> --------------------------------------
> Leonid Gibiansky, Ph.D.
> President, QuantPharm LLC
> web: www.quantpharm.com <http://www.quantpharm.com>
> e-mail: LGibiansky at quantpharm.com <http://quantpharm.com>
> tel: (301) 767 5566
>
>
>
>
> Mark Sale - Next Level Solutions wrote:
> >
> > General question:
> > What are practical limits on the magnitude of OMEGA that is compatible
> > with the FO and FOCE/I methods? I seem to recall Stuart at one time
> > suggesting that a CV of 0.5 (exponential OMEGA of 0.5) was about the
> > limit at which the Taylor expansion can be considered a reasonable
> > approximation of the real distribution. What about FOCE-I?
> > I'm asking because I have a model that has an OMEGA of 13, exponential
> > (and sometimes 100), FOCE-I, and it seems to be very poorly behaved in
> > spite of overall reasonable-looking data (i.e., the structural model
> > traces a line that looks like the data, but some people are WAY above
> > the line and some are WAY below, and some rise MUCH faster, and some
> > rise MUCH later; by WAY I mean >10,000 fold, but residual error looks
> > not too bad). Looking at the raw data, I believe that the
> > variability is at least this large. Can I believe that NONMEM FOCE
> > (FO?) will behave reasonably?
> > thanks
> > Mark
> >
>