NONMEM Users Network Archive

Hosted by Cognigen

RE: FW: PPC

From: Matt Hutmacher <matt.hutmacher>
Date: Mon, 28 Jul 2008 11:58:26 -0400

Hi Nick,

The log-transform I discussed was just a simple example for a parameter
bounded below by 0 (similar to CL, which is generally considered lognormally
distributed between individuals). Constraints on other parameters can be
accommodated as well, such as with the logit. I look forward to a publication
that details the risks/benefits of permitting lack of convergence in the
bootstrap, which we could cite in reports; citing discussions on nmusers is
difficult. I do agree the bootstrap is quite useful, especially if you
don't trust the LRT. I still think it is good to show, without a $COV step,
that the estimates were achieved at a minimum and not at a saddle point.
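
As a minimal sketch of such transforms (Python, with invented values; not
NONMEM code):

    import math

    theta = -0.5  # unconstrained estimate on the transformed scale (hypothetical)

    # exp() maps any real THETA to a parameter bounded below by 0 (e.g. CL)
    cl = math.exp(theta)                     # ~0.61

    # the inverse logit maps any real THETA to a parameter bounded in (0, 1)
    frac = 1.0 / (1.0 + math.exp(-theta))    # ~0.38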

For the renal example, if we did not have the correct number in each CLcr
group and it influenced CL, then the CI might be too wide, since the span of
CLcr used to support the estimate of the CLcr covariate parameter would not
be constrained to be wide enough (this is similar to Stephen Duffull's
recent statement that often the covariate distribution is not of
sufficient span to have adequate power). Therefore, an LRT and the CI might
not show the same signal. Depending on one's trust in the LRT, one might
conclude that less information is known about the CLcr-CL relationship.
While larger-than-nominal coverage is acceptable with respect to the CI
statement, inefficient use of information is expensive. I am assuming an
adequate sample size for a reasonable COV step estimate and that the
subjects are densely sampled enough for FOCE to adequately approximate the
true likelihood. The latter can, however, be addressed by other methods.

Kind regards,
Matt



-----Original Message-----
From: owner-nmusers On Behalf Of Nick Holford
Sent: Friday, July 25, 2008 5:14 PM
To: nmusers
Subject: Re: FW: [NMusers] PPC

Matt,

Thanks for your comments which I almost completely agree with.

You propose to log-transform the parameters so that the resulting
unlogged uncertainty will be skewed. But this does not mean you will
get a better picture of the uncertainty. If the 'true' parameter
uncertainty is left-skewed, the log transformation will force some kind
of right skewness, which would not be correct.
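
A quick numerical illustration of this point (a Python sketch; the mean and
SE on the log scale are invented):

    import numpy as np

    rng = np.random.default_rng(0)

    # normal uncertainty assumed on the log scale
    log_theta = rng.normal(loc=1.0, scale=0.3, size=100_000)
    theta = np.exp(log_theta)

    # the back-transformed distribution is lognormal, hence right-skewed,
    # whatever the shape of the 'true' uncertainty: the mean exceeds the median
    print(theta.mean(), np.median(theta))    # ~2.84 vs ~2.72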

The issue of NONMEM bootstrap success rates and confidence intervals has
been discussed at length on nmusers:
http://www.cognigencorp.com/nonmem/nm/99jul292006.html -- search the
thread for "slim evidence"
http://www.cognigencorp.com/nonmem/nm/99jul152003.html -- search the
thread for "assess imprecision"
Based on experimental evidence with real and simulated data sets, it
makes negligible difference to the bootstrap confidence intervals
whether NONMEM converges and runs the covariance step or terminates
with rounding errors. What is more certain is that CIs based on the
assumption of normally distributed uncertainty and asymptotic SEs will
have the wrong coverage if the true uncertainty is not symmetrical (a
common finding for non-linear model parameters).

I agree that simple bootstrapping can cause the problems you have
outlined, but it is a helpful tool when NONMEM refuses to run the
covariance step and you want to get some feel for parameter uncertainty.
If you took your example of a small renal impairment vs normal study,
what difference do you think there would be in the 90% CI for clearance
based on a naive bootstrap versus some other, better constructed procedure?

Best wishes,

Nick
 
Matt Hutmacher wrote:
> Hello all,
>
> I look forward to seeing the tutorial on the web as well.
>
> I have seen comments that some modelers prefer the non-parametric
> bootstrap to the $COV step because it captures skewed distributions.
> For reasonable sample sizes, the uncertainty distributions should be
> normal, and in my experience, for stable and good-fitting models, the
> results from the non-parametric bootstrap and the $COV step are highly
> similar. When sample sizes are smaller, or a parameter is not well
> estimated because of the design (ED50 quickly comes to mind), the
> nonparametric bootstrap might show skewness. In this case, the $COV
> step uncertainty distribution can be improved by re-parameterizing from
> ED50=THETA(X) to ED50=EXP(THETA(X)). Note that this parameterization
> does not need any boundary constraints (in $THETA) either. Maximum
> likelihood is invariant to these changes, so the same objective
> function and fit (given a stable model) should be achieved. The
> uncertainty of THETA(X), for example THETA(X) +/- 2*STANDARD
> ERROR(THETA(X)), translates into an ED50 interval of
> EXP(THETA(X) +/- 2*STANDARD ERROR(THETA(X))), which is skewed.
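>
> As a worked example of this back-transformation (a Python sketch; the
> THETA and standard error values are invented):
>
>     import math
>
>     theta_x, se = 4.0, 0.25           # hypothetical THETA(X) and SE(THETA(X))
>
>     ed50 = math.exp(theta_x)          # point estimate, ~54.6
>     lo = math.exp(theta_x - 2 * se)   # lower limit, ~33.1
>     hi = math.exp(theta_x + 2 * se)   # upper limit, ~90.0
>
>     # the interval is asymmetric around the point estimate:
>     print(ed50 - lo, hi - ed50)       # ~21.5 below vs ~35.4 above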
>
> I have seen the nonparametric bootstrap used without thought to how it
> should be implemented given the designs and structures of the data.
> For example, consider a single-dose study with n=6 per group and a
> study to assess exposure stratified by CLcr groupings (i.e. kidney
> function) with n=8 per group. Because the numbers in the dose and CLcr
> groups are fixed by design, the nonparametric sampling procedure should
> sample with replacement within groups to preserve the number of
> patients per group fixed by the design, that is, n=6 or n=8. If this
> is not done, then dose and CLcr are conceptually random with respect to
> the bootstrap, and a sampled data set could be imbalanced relative to
> the original design. These imbalances will influence the estimated
> uncertainty distribution and could bias the results. One can see that
> doing this right can get complicated quickly.
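>
> As a sketch of that within-group resampling (Python; the grouping
> structure is assumed for illustration):
>
>     import random
>     from collections import defaultdict
>
>     def stratified_bootstrap(subjects, stratum_of):
>         """Resample subjects with replacement within each stratum,
>         preserving the per-group sample size fixed by the design."""
>         groups = defaultdict(list)
>         for s in subjects:
>             groups[stratum_of[s]].append(s)
>         sample = []
>         for members in groups.values():
>             # draw exactly len(members) subjects from this stratum
>             sample.extend(random.choices(members, k=len(members)))
>         return sample
>
>     # e.g. n=8 per CLcr group, resampled within group:
>     strata = {i: ("normal" if i < 8 else "impaired") for i in range(16)}
>     ids = stratified_bootstrap(list(range(16)), strata)
>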
> Another example would be fitting an Emax model to a biomarker measured
> at a set of 5 distinct, fixed concentrations, each replicated n=10
> times. If we sample without regard to the fixed nature of the design,
> we may fail to get many of the Emax models to converge, which is
> unrealistic. This brings up convergence itself, which can be another
> issue. How do I justify in my report that my 90% confidence intervals
> are reasonable if only 80% of my bootstrap runs converge?
> Additionally, if the $COV step fails and a modeler uses the
> nonparametric bootstrap to estimate uncertainty, how does the modeler
> demonstrate that the estimates were achieved at a minimum OFV and not
> at a saddle point? The $COV step provides this check automatically.
>
> I do not proclaim that the $COV step is perfect, only that it is a
> useful and valuable tool in modeling, and that the bootstrap should not
> be used without thought.
>
> To be fair, a drawback to the $COV uncertainty distributions is that
> non-positive-definite OMEGA matrices can still be sampled, which are
> invalid. However, the same parameterization trick as used above can be
> implemented to mitigate some of this behavior. Instead of
> parameterizing CL=THETA(1)*EXP(ETA(1)) and estimating the variance of
> ETA(1) in $OMEGA, the model can be re-parameterized as
> CL=EXP(THETA(1))*EXP(EXP(THETA(2))*ETA(2)). In this case the variance
> of ETA(2) is fixed to 1 in $OMEGA, and EXP(THETA(2)) provides the
> estimate of its standard deviation. This will bound the variance
> component away from 0 and give the uncertainty distribution some
> skewness. Correlations between variance components can also be forced
> between -1 and 1 by re-parameterization, but this is more complicated.
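>
> A quick numerical check of this behavior (a Python sketch; the THETA(2)
> estimate and its standard error are invented):
>
>     import numpy as np
>
>     rng = np.random.default_rng(1)
>
>     theta2_hat, se2 = -0.7, 0.2   # hypothetical $COV results for THETA(2)
>
>     # sample THETA(2) from its normal uncertainty and back-transform;
>     # SD of the ETA term is EXP(THETA(2)), so its variance is EXP(THETA(2))**2
>     theta2 = rng.normal(theta2_hat, se2, size=100_000)
>     omega = np.exp(theta2) ** 2
>
>     print(omega.min() > 0)                   # True: bounded away from 0
>     print(omega.mean() > np.median(omega))   # True: right-skewed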
>
> Matt
>
> -----Original Message-----
> From: owner-nmusers On Behalf Of Nick Holford
> Sent: Friday, July 25, 2008 2:12 AM
> To: nmusers
> Subject: Re: FW: [NMusers] PPC
>
> Mahesh,
>
> Thanks for your further info on VPC and PPC. I agree that the bootstrap
> distribution of the parameters is probably better than the asymptotic
> normal distribution implied by NONMEM's covariance step results.
>
> I don't have your experience of comparing VPC and PPC, so I hope you
> can find a way to publish these results, which are similar to the
> limited exploration reported by Yano et al.
>
> VPC is not the perfect answer for model evaluation, but it has some
> useful properties compared with the traditional methods (standard
> horizontal residual plots and diagonal residual plots, i.e. DV vs PRED
> and IPRED). I certainly haven't seen any reason to use a PPC for model
> evaluation. It does, however, have a value (in theory) for predicting
> the uncertainty in the outcome of a future trial.
>
> Nick
>
> Samtani, Mahesh [PRDUS] wrote:
>
>> Dear Nick,
>>
>> Thank you for teaching these important concepts. Could you and others
>> kindly comment on the following 2 aspects:
>>
>> a) The variance-covariance matrix based on the estimated standard
>> errors and their correlation will generate a multivariate normal
>> distribution for the parameters. However, the posterior distribution
>> of parameters may not be normally dispersed. Wouldn't it be better to
>> use the bootstrap results as a source for the uncertainty
>> distribution? I have to admit that the bootstrap method can be quite
>> time-consuming. See one such example at:
>> http://www.page-meeting.org/pdf_assets/2373-MSamtani%20PAGE%20Poster%202007.pdf
>>
>> b) More importantly, after going through the PPC and VPC comparison
>> for several cases, I always find that if the parameter estimates have
>> reasonable precision from the original NONMEM run then the PPC and VPC
>> results are essentially identical. This echoes an earlier comment
>> that most of the variation is explained by BSV and RV. Has anyone
>> else experienced this behavior, and if so, shouldn't VPC be enough for
>> model verification?
>>
>> Kindly advise...Mahesh
>>
>> -----Original Message-----
>> From: owner-nmusers [mailto:owner-nmusers
>> Sent: Wednesday, July 23, 2008 8:38 AM
>> To: Nick Holford; nmusers
>> Subject: RE: FW: [NMusers] PPC
>>
>>
>> Hi Nick,
>>
>> I have been following this discussion and I think it is very helpful to
>> many of us. Can you please elaborate on that last part about binning?
>> What is that for? I must have missed something there.
>>
>> Thanks,
>> Susan
>>
>> Susan Willavize, Ph.D.
>> Global Pharmacometrics Group
>> 860-732-6428
>>
>> -----Original Message-----
>> From: owner-nmusers On Behalf Of Nick Holford
>> Sent: Wednesday, July 23, 2008 6:32 AM
>> To: nmusers
>> Subject: Re: FW: [NMusers] PPC
>>
>> Paul,
>>
>> The procedure you describe is a way of producing a posterior
>> predictive check, but I don't know of any good examples of its use. A
>> simpler way of doing a PPC samples the population parameter estimates
>> from a distribution centered on the final estimates with a
>> variance-covariance matrix based on the estimated standard errors and
>> their correlation. VPCs are not posterior predictive checks because
>> they do not take account of the posterior distribution of the
>> parameter estimates (i.e. the final estimates with their uncertainty).
>> A VPC typically ignores the parameter uncertainty and uses what has
>> been called the degenerate posterior distribution (see Yano Y, Beal
>> SL, Sheiner LB. Evaluating pharmacokinetic/pharmacodynamic models
>> using the posterior predictive check. J Pharmacokinet Pharmacodyn.
>> 2001;28(2):171-92 for terminology, methods and examples).
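>>
>> A minimal sketch of that sampling step (Python; the estimates,
>> standard errors and correlation are invented):
>>
>>     import numpy as np
>>
>>     rng = np.random.default_rng(0)
>>
>>     est = np.array([10.0, 50.0])    # hypothetical final estimates (e.g. CL, V)
>>     se = np.array([1.0, 4.0])       # their standard errors
>>     corr = np.array([[1.0, 0.3],
>>                      [0.3, 1.0]])   # correlation of the estimates
>>
>>     # variance-covariance matrix from the SEs and their correlation
>>     cov = corr * np.outer(se, se)
>>
>>     # one population parameter set per PPC replicate
>>     params = rng.multivariate_normal(est, cov, size=1000)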
>>
>> When I spoke of uncertainty I did not mean random variability (OMEGA
>> and SIGMA). A VPC will simulate observations using the final THETA,
>> OMEGA and SIGMA estimates.
>>
>> You can calculate distribution statistics for your observations (such
>> as median and 90% intervals) by combining the observations (one per
>> individual) at each time point to create an empirical distribution.
>> The statistics are then determined from this empirical distribution.
>> In order to get sufficient numbers of points (at least 10 is
>> desirable) you may need to bin observations into time intervals, e.g.
>> 0-30 mins, 30-60 mins, etc.
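>>
>> For instance (a Python sketch; the time and observation arrays are
>> assumed):
>>
>>     import numpy as np
>>
>>     def binned_stats(times, obs, edges):
>>         """Median and 90% interval of observations pooled within time bins."""
>>         out = []
>>         for lo, hi in zip(edges[:-1], edges[1:]):
>>             vals = obs[(times >= lo) & (times < hi)]
>>             if len(vals) >= 10:   # at least 10 points per bin is desirable
>>                 p5, p50, p95 = np.percentile(vals, [5, 50, 95])
>>                 out.append((lo, hi, p5, p50, p95))
>>         return out
>>
>>     # e.g. 30-minute bins: binned_stats(t, conc, edges=[0, 30, 60, 90, 120])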
>>
>> Nick
>>
>> Paul Matthew Westwood wrote:
>>
>>> ________________________________________
>>> From: Paul Matthew Westwood
>>> Sent: 22 July 2008 13:20
>>> To: Nick Holford
>>> Subject: RE: [NMusers] PPC
>>>
>>> Nick,
>>>
>>> Thanks for your reply, and apologies once again for another confusing
>>> email. I think I am using VPC, which as I understand it is simulating
>>> n datasets using the final parameter estimates gained from the final
>>> model, and then taking the median and 90% confidence interval (for
>>> example) for each simulated concentration and comparing these to the
>>> real concentrations. Whereas PPC is where you then run the final
>>> model through the simulated datasets and compare selected statistics
>>> of these new runs with the original. Is this correct? You mentioned
>>> including uncertainty on the parameter estimates in the simulated
>>> datasets. Would one usually not include uncertainty (fixing the
>>> error terms to zero) in the simulated datasets? Doing this with mine
>>> obviously produced much better concentrations, with no negative
>>> values and no 'significant' outliers. Another thing you mentioned is
>>> comparing the median of the simulated concentrations with the median
>>> of the original dataset concentrations, but as there is only one
>>> sample for any particular time point, would this indicate the
>>> unsuitability of VPC (and furthermore PPC) for this model?
>>>
>>> Thanks again,
>>> Paul.
>>> ________________________________________
>>> From: owner-nmusers On Behalf Of Nick Holford [n.holford
>>> Sent: 22 July 2008 10:30
>>> To: nmusers
>>> Subject: Re: [NMusers] PPC
>>>
>>> Paul,
>>>
>>> It's not clear to me if you did a VPC (visual predictive check) using
>>> just the final estimates of the parameters, or tried to do a
>>> posterior predictive check (PPC) including uncertainty on the
>>> parameter estimates in the simulation.
>>>
>>> I don't have any experience with PPC but I don't think it's helpful
>>> for model evaluation. It's more of a tool for understanding
>>> uncertainties of predictions for future studies.
>>>
>>> I assume you don't have complications like informative dropout
>>> processes to complicate the simulation, so if you did a VPC and the
>>> median of the predictions doesn't match the median of the
>>> observations then your model needs more work.
>>>
>>> Some negative concs are OK, but 'impossibly high values' point to
>>> problems with your model.
>>>
>>> So I think you can safely say the VPC has worked very well -- it has
>>> told you that you need to think more about your model. You might
>>> find some ideas in these references:
>>>
>>> 1. Tod M, Jullien V, Pons G. Facilitation of drug evaluation in
>>> children by population methods and modelling. Clin Pharmacokinet.
>>> 2008;47(4):231-43.
>>> 2. Anderson BJ, Holford NH. Mechanism-Based Concepts of Size and
>>> Maturity in Pharmacokinetics. Annu Rev Pharmacol Toxicol.
>>> 2008;48:303-32.
>>>
>>> Nick
>>>
>>> Paul Matthew Westwood wrote:
>>>
>>>> Hello all,
>>>>
>>>> I wonder if someone can give me some tips on PPC.
>>>>
>>>> I am working on a midazolam dataset with a pediatric population, and
>>>> have decided to use PPC as a model validation technique. The dataset
>>>> I am modelling has up to 43 patients, at different ages, different
>>>> weights, different times of dosing and sampling, and different
>>>> doses. I simulated 100 datasets using NONMEM VI, fixing all
>>>> parameters to the final estimates from the model. The simulated
>>>> datasets produced had a large proportion of negative concentrations,
>>>> and also a few impossibly large concentration values. Also the
>>>> median, 5th and 95th percentiles were not very promising, and the
>>>> resulting graphs not very clean.
>>>>
>>>> Firstly, can I use PPC with any degree of confidence with a dataset
>>>> such as this, and if so, do I omit the negative concentration values
>>>> from the analysis?
>>>>
>>>> Thanks in advance for any help given.
>>>>
>>>> Paul Westwood,
>>>> PhD Student,
>>>> QUB,
>>>> Belfast.
>>> --
>>> Nick Holford, Dept Pharmacology & Clinical Pharmacology
>>> University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New
>>> Zealand
>>> n.holford
>>> http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford

--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
n.holford
http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford

